Systems Concepts and Principles



The Concept of a "System"

A "System" is any set of interrelated and interacting component parts that may be identified and distinguished from their environment or context.

System and Environment

Inherent in the notion of a "System" is the identification and cognitive selection (i.e. recognition) of the parts or components, and their interactions, that are counted as being "in the system" from the morass of (potential) components and interactions that form the rest of the universe. The former are labelled as "System Components" or "System Flows" while everything else is labelled as "Environment". This identification is a cognitive selection - people choose to see and think about things that way, and this choice underpins the discourse about "The System". Therefore "System Conceptualisation" is a form of abstraction - of abstract thinking. The elements of the System are mentally identified for the cognitive and communicative purposes of the Systems Thinker - and all the "unnecessary" details of the real-world interactions and behaviours of those components are "left out". The System is abstracted out of the Environment.

However, while it is important to realise that the process of System Conceptualisation is a mental process that takes place in the thoughts and speech of Systems Thinkers, it is also important to realise that the identification is of things in the "real world". It is important not to confuse or conflate the actual system component with the idea of the system component in the systems thinker's head. "The System" - its components and interactions - has an existence that is independent of any particular thinker's conception and is, in principle, accessible to and common to all systems thinkers who choose to look (metaphorically) in the same way. But it is (or should be) a clear, conscious choice to focus cognitive attention on the particular System - as a route to understanding it and manipulating its behaviour - and to disregard (at least to some extent) the undifferentiated Environment that is the context for the system.

Thus the process of Systems Conceptualisation above implies a Realist Philosophy - where the ideas talked about are understood to refer to things in an independent "real world". However, this does not imply the hard, reductive, materialist, physicalist position that became the caricature of positivist thinking during the later 20th Century. There are many things in the physical world that are non-material: Energy, Electric Fields, Magnetic Fields, Gravitational Fields, and so on - and even the material things do not seem to correspond to traditional philosophical notions of "substance". What are the properties of an electron - and, more importantly, why? What is more, the traditional non-material or "mental" elements of traditional philosophical discourse are clearly parts or components of the real world. A perception is a real thing; it has profound consequences - causal effects - on the behaviour of the thing doing the perceiving. Even a hallucination can be counted as "real" in some sense - it is just that in a hallucination the perceptions do not correspond very well with the objective reality; but this is the normal situation - our perceptions are selected for their evolutionary value, not for their accuracy in creating a mental model of the world.

System Components, Interactions and Flows

System Structure and Dynamics

System Boundary

Systems Concepts

"Boundary Conditions"

The notion of "Boundary Conditions" is one that originates from the mathematical modelling of natural systems - as, for example, in Physics. In Physics the fundamental dynamics of the system are usually expressible in a mathematical model and it may be possible to specify an "Equation of Motion" - that describes how the state of the system changes over time. For example, an electrically resonant inductor-capacitor (LC) network can be described by an equation derived from Maxwell's equations (that describe the fundamental physics of electromagnetism) that describes how the voltages, charges and currents in the network change over time. Such an equation, however, is not a complete description (model) of the physical system; it does not specify the state of the system at any particular time. To get a more complete description it is necessary to specify the state of the system - as modelled - at a particular time. The inherent causation (model) in the equation then implicitly specifies the state of the system at all the other times - all one has to do is solve the equation and evaluate it at the value of the time variable required.
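
As a sketch of this idea (a standard textbook form; the symbols L for inductance, C for capacitance and q for capacitor charge are assumed here rather than taken from the text), the equation of motion of an ideal LC network and its general solution are:

<math>
L \frac{d^2 q}{dt^2} + \frac{q}{C} = 0
\quad\Rightarrow\quad
q(t) = A \cos(\omega_0 t) + B \sin(\omega_0 t),
\qquad
\omega_0 = \frac{1}{\sqrt{LC}}
</math>

The equation fixes only the form of the behaviour; the constants A and B - and hence the state at every time - are determined only once boundary (here, initial) conditions such as <math>q(0) = q_0</math> and <math>\dot{q}(0) = 0</math> are specified.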

Boundary conditions, however, need not be temporal - they can be spatial too; for example, the current at the tip of a radio antenna must be zero (at all times) because the electrical charges moving around in the antenna have nowhere to go at the tip - they cannot flow past that point, so the current (the flow of charges) there must be zero. Boundary conditions also exist (logically) and apply in non-technical and non-physical systems. For example, if the Geriatric Hospital Care and Social Care systems (in the UK) were well-designed, the rate of discharge of people from Hospital would match the rate of people entering Social Care (a boundary condition at the boundary between the two systems) and there would be neither crises in capacity or funding for either system nor the phenomenon of "bed blocking". That such phenomena occur is a straightforward causal consequence of the lack of Systems Thinking in the Department for Health and other Government departments.

System Behaviour

Inputs and Outputs

The PESTLE Checklist is a well known taxonomic checklist for the external (exogenous) factors influencing or being influenced by a system - or, in other words, the categories of inputs and outputs crossing the boundary between the system and its environment (or context).

Transform (Transforming Process / Function)

Control Inputs

Discrete, "Lumped", and Continuous Inputs and Outputs

Information Inputs and Outputs

Causal Inputs and Outputs

Feedback and Feedforward

"Feedback" and its converse, "Feedforward", are fundamental concepts in Systems Thinking. Feedback refers to an arrangement where the system is so organised or structured that information from the outputs of the system is used to change the inputs to the system; information is fed back from outputs to inputs. Feedforward is the converse - where information about the inputs to the system is 'fed forward' to alter the outputs of the system. Since the outputs are related to the inputs by the transform function / operation / process, the net effect is always to change the outputs of the system. Feedback is often used as the basis for what might be called a "reactive control (sub)system" that may be designed for the system, whereas feedforward - though much rarer in practice - can be used for a "proactive control (sub)system".
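
The distinction can be sketched in a few lines of code (a minimal toy model with made-up gains and signals, not a description of any particular system):

```python
# Contrast feedback and feedforward around a simple system y = G * (u + d),
# where u is the control input and d a disturbance entering with it.

G = 2.0          # the system's transform: output = G * (input + disturbance)
SETPOINT = 10.0  # the desired output

def plant(u, d):
    """The system's transform from input (plus disturbance) to output."""
    return G * (u + d)

def feedback_step(u, y, k=0.2):
    """Reactive control: information about the OUTPUT is fed back to adjust the input."""
    return u + k * (SETPOINT - y)

def feedforward(d):
    """Proactive control: information about the INPUT disturbance is fed forward;
    with a perfect model (u = SETPOINT/G - d) the disturbance is cancelled exactly."""
    return SETPOINT / G - d

# Feedback must iterate; the error shrinks over successive cycles.
u, d = 0.0, 1.0
for _ in range(50):
    y = plant(u, d)
    u = feedback_step(u, y)

print(round(plant(u, d), 3))     # converges close to the setpoint
print(plant(feedforward(d), d))  # feedforward with a perfect model hits it exactly
```

Note that the feedforward controller only works because it embodies a correct model of the transform; the feedback controller needs less knowledge but reacts after the fact.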

Balancing (Negative) and Reinforcing (Positive) Feedback

Balancing feedback is where a change in some output of the system leads to a change in the input to the system in the opposite direction, given the normal relationship between inputs and outputs for the system; an increase in output leads to a decrease in input and vice versa. Hence there is logically a minus sign involved - a negative relation of proportionality - and balancing feedback is also called negative feedback. Perhaps the simplest example of balancing feedback is the simple pendulum; here a displacement to the left (or anticlockwise) results in a rightward (or clockwise) component of gravitational force that pushes the pendulum bob to the right, decreasing the displacement (and vice versa). Of course, in reality such a displacement sets up a decaying oscillation of leftward and rightward displacements that persists for some time until the pendulum comes to an equilibrium state of being purely vertical. This is a general feature of systems that involve negative feedback - an oscillation of some frequency and decay rate will be set up and, provided the oscillation decays, the system will eventually return to a state of equilibrium. Hence negative feedback is associated with stable systems - systems that are robust in response to a "displacement" (or perturbation) and will return to a state of equilibrium in due course. However, negative feedback on its own is not sufficient to ensure stability; it is possible that the oscillations grow in magnitude rather than diminish, and so lead to instability despite the negative feedback. This depends on the "Gain" in the system.

Reinforcing or positive feedback is the converse, where a change in the output produces a change in the input in the same direction - which then feeds through the system to produce yet further change in the output in the same direction. Positive feedback is usually associated with instability and catastrophic (in a technical sense) change. Perhaps the simplest example of a system with positive feedback is the "ball-at-the-top-of-a-hill"; at the very top the hill is flat and the ball is in an equilibrium position - but an unstable one. If the ball is displaced onto the slope of the hill - which near the top may be a very slight slope - a component of gravitational force is generated that pushes the ball further away from the top of the hill (in the same direction as the initial displacement) - and the ball runs off down the hill and never returns to the top (until some radical intervention from outside the system occurs). The classic example of a non-technical positive feedback system is the "run-on-the-bank". Here, a decrease in confidence among some investors in the bank leads them to withdraw their funds, and the observation of this withdrawal leads to a further decrease in confidence among investors and an increase in the number and speed of withdrawals. Again this positive feedback loop continues until some crisis point is reached (such as the bank having insufficient funds to meet its obligations) or some radical intervention (such as the bank being acquired by the Government) occurs to break the vicious cycle. Systems exhibiting positive feedback are often associated with "avalanche effects" where the system "spirals out of control" (rapidly, and with acceleration) until the crisis point is reached (or a radical intervention is made) and the dynamics of the system are radically changed (either by conscious design or by the emergence of "higher-order" forces).
Nuclear weapons are positive feedback systems: the crisis is a very big bang, a massive release of energy that destroys everything around it. Such crises are usually painful, can be life-threatening, and are avoided by good regulation (or "governor"-ance). Positive feedback is always dangerous but not always a bad thing. It can be used to create systems with fast dynamics - but such systems need to be very carefully designed because of the high risk of instability, catastrophic change and a loss of control.
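
The run-on-the-bank can be caricatured numerically (all numbers here are made up for illustration): each round, the withdrawals observed amplify the next round's withdrawals, and the avalanche runs until the crisis point where the bank cannot meet demand.

```python
# A toy "run on the bank": withdrawals grow geometrically (positive feedback)
# until the bank's remaining funds cannot cover the next withdrawal (the crisis).

funds = 1000.0
withdrawal = 1.0   # an initial small withdrawal after a confidence wobble
growth = 1.5       # each round's panic amplifies the next round's withdrawals
rounds = 0

while funds >= withdrawal:
    funds -= withdrawal
    withdrawal *= growth   # the output (withdrawals seen) feeds back, same direction
    rounds += 1

print(rounds, round(funds, 2))  # crisis is reached after only a handful of rounds
```

Despite starting at a thousandth of the bank's funds, the withdrawals exhaust it in about fifteen rounds - the characteristic acceleration of an avalanche effect.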

Stability and Instability

Stability - or Instability - is a property of a system that describes its response to "perturbatory" (disturbing) inputs (or input changes). Stability is where the system has the property of returning to some equilibrium state at some time after the occurrence of a perturbation at the system inputs, possibly after several oscillations about the equilibrium state. An example of a system designed for stability is the suspension system of a car - intended to produce a "stable ride" and stable vehicle dynamics. Instability is the converse, whereby the system does not return to any equilibrium state; this may be characterised by growing oscillations in the system until it is driven to the System Limits. A classic example of such instability is the "feedback howl" of a public address system with the gain turned up too high. Approaching the System Limits often involves some sort of "crisis" for the system and a radical change in the dynamics (or indeed the structure) of the system. Stable systems are characterised by balancing (negative) feedback and high dampening, maybe as part of some designed feedback control (sub)system. Unstable systems are characterised by reinforcing (positive) feedback and possibly high gain. While generally speaking stability is a good thing and instability a bad thing, instability is often associated with fast dynamics and therefore may achieve higher performance - but at the risk of a loss of control (uncontrolled or uncontrollable variations in both inputs and outputs). Hence systems are sometimes deliberately designed to be unstable in the "classical" sense - and means other than those of classical control theory are used to achieve some measure of control (or at least "influence") over the system behaviour.
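
The role of gain can be shown with the simplest possible discrete model (a hypothetical one-line system, not drawn from the text): negative feedback inverts each perturbation, but the loop gain decides whether the resulting oscillation decays or grows.

```python
# Negative feedback with too much gain is still unstable: the perturbation
# trajectory p[n+1] = -gain * p[n] oscillates in sign either way, but its
# magnitude decays only when |gain| < 1.

def response(gain, steps=20, p0=1.0):
    """Return the perturbation trajectory under negative feedback of given gain."""
    p = [p0]
    for _ in range(steps):
        p.append(-gain * p[-1])   # the minus sign is the balancing (negative) feedback
    return p

stable = response(0.8)     # |gain| < 1: oscillation decays towards equilibrium
unstable = response(1.2)   # |gain| > 1: oscillation grows despite negative feedback

print(abs(stable[-1]) < abs(stable[0]))      # True: decay
print(abs(unstable[-1]) > abs(unstable[0]))  # True: growth - the "feedback howl"
```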

Whereas feedback is often used to produce stability in a reactive (feedback) control (sub)system, feedback with delay in a system with people as components is a well-known recipe for producing instability. The Beer Game is a management game (for education/instruction purposes) intended to demonstrate this production of instability in the context of a hypothetical supply chain (or network) in a competitive (retail) market. Social or sociotechnical systems designed by politicians, civil servants and managers uneducated in Systems Thinking often exhibit this sort of instability - leading to recurring crises. The Senge version of System Dynamics identifies a number of System Archetypes (or Enterprise Architecture patterns and anti-patterns) some of which feature delayed feedback and instability.
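
The destabilising effect of delay can be sketched with a toy model (hypothetical gains and delays - this is not the actual Beer Game ruleset): a correction applied against a stale, delayed measurement turns a perfectly adequate feedback gain into a source of growing oscillation.

```python
# The same corrective gain, with and without delay: fresh information kills the
# perturbation at once; information that is two steps stale makes it grow.

def simulate(gain, delay, steps=20):
    """Perturbation p, corrected each step against the measurement from `delay` steps ago."""
    p = [1.0] * (delay + 1)   # the initial perturbation (and its history)
    for n in range(delay, delay + steps):
        p.append(p[n] - gain * p[n - delay])
    return p

no_delay = simulate(gain=1.0, delay=0)   # correction uses fresh data
delayed = simulate(gain=1.0, delay=2)    # same gain, stale data

print(no_delay[1])                    # 0.0 - perturbation cancelled immediately
print(max(abs(x) for x in delayed))   # swings grow well beyond the initial 1.0
```

This is, in miniature, the supply-chain phenomenon the Beer Game demonstrates: each actor corrects against out-of-date information and the corrections themselves drive the oscillation.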

Oscillation and Dampening

"Oscillation" is the periodic variation in the system behaviour and (one of) its output(s). Oscillation is possible in any system exhibiting balancing feedback; the feedback creates a "restorative force" that pushes the system back towards an equilibrium state - but the system overshoots and an oscillation ensues. An oscillation may be triggered (provoked or driven) by either an oscillatory input or by some sort of "impulse" (a short-lived change in the input). Perhaps the simplest and most familiar oscillating system is the sprung weight - a weight on the end of a spring. Any system capable of oscillation will have a "characteristic frequency" that depends on the structure of the system and the response of the component parts. How the system behaves as the frequency of an oscillatory input is varied is called the system's "frequency response". A theoretically ideal system with no "dissipative" mechanism would, in principle, go on oscillating forever and the energy of the oscillation would be conserved in the system; but practically all real-world systems (technical or social) do have dissipative mechanisms and the energy is lost from the system. As the energy is dissipated the oscillation decays (reduces in amplitude). The collective effect of the dissipative mechanisms is called the natural "dampening" of the system - and the level of dampening governs the rate at which the oscillations decay and the speed with which the system returns to an equilibrium state. Dampening may be augmented or designed-in in designed systems in order to reduce oscillation (or 'ringing') and/or make the system return to an equilibrium faster. When the dampening is arranged such that the system returns to (an) equilibrium as quickly as possible without overshooting, it is called "critical dampening".
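
A numerical sketch of the sprung weight makes the dampening levels concrete (made-up parameters; the integration is a simple semi-implicit Euler scheme, so this is an illustration rather than a precision simulation):

```python
import math

def simulate(zeta, omega=2 * math.pi, dt=0.001, t_end=5.0):
    """Damped oscillator x'' = -omega^2 x - 2*zeta*omega*x', displaced and released."""
    x, v = 1.0, 0.0   # initial displacement, at rest
    xs = []
    for _ in range(int(t_end / dt)):
        v += (-omega**2 * x - 2 * zeta * omega * v) * dt
        x += v * dt
        xs.append(x)
    return xs

under = simulate(zeta=0.1)      # light dampening: oscillates, amplitude decays
critical = simulate(zeta=1.0)   # critical dampening: fastest return, no overshoot

print(min(under) < 0)         # True: the under-dampened system overshoots equilibrium
print(abs(under[-1]) < 0.1)   # True: its oscillation has largely decayed
print(min(critical) > -1e-3)  # True: the critically dampened system never overshoots
```

The parameter `zeta` (the dampening ratio) is the conventional control-theory way of expressing "the level of dampening": below 1 the system rings, at 1 it is critically dampened.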

Relaxation Time and Natural Frequency

The relaxation time is a characteristic property of the system that is the time-domain counterpart of the natural frequency of the system. The relaxation time is the time it takes for the system to return to its normal, quiescent state following a disturbance or perturbation to its inputs. This must be clearly distinguished from the system lag or latency - which is the time it takes for a change in the inputs, or for information, materials or energy, to transit the system from input to output - and from the cycle time, which is the lag plus the time it takes for the signal to propagate back to the input from the output through the feedback loop. The time-domain notion of "relaxation time" is a part of the description of the system that characterises the response of the system to an impulse or "step-change" at the input. According to Fourier Theory such an impulse can be thought of as the infinite sum of Fourier harmonics, and the time-domain and frequency-domain pictures are equivalent, alternative descriptions of the system response. The 'natural frequency' of the system is the base frequency of this Fourier series. Resonance may occur when the frequency of change (or one of its significant components) at the inputs matches one of the harmonics of the system's natural frequency.
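
For the standard damped-oscillator model (an illustration; the symbols <math>\omega_0</math> for natural frequency and <math>\zeta</math> for the dampening ratio are assumed here), the impulse response ties the two notions together:

<math>
x(t) \propto e^{-t/\tau} \cos\!\left(\omega_0 \sqrt{1 - \zeta^2}\, t\right),
\qquad
\tau = \frac{1}{\zeta \, \omega_0}
</math>

so a lightly-dampened system (small <math>\zeta</math>) has a long relaxation time <math>\tau</math>, and it "rings" at a frequency just below its natural frequency.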


Controllability and Control

A system is "controllable" if it is possible to adjust or fix the inputs to the system in order to produce the wanted or desired output(s). Controllability is therefore a property or quality of some systems; and the act or process of "control" is one of fixing or changing the (control) inputs (if possible). Obviously, since the objective of "control" is to determine not the inputs but the outputs of the system, the practical ability or possibility of control requires a good understanding of the operation of the system - particularly a knowledge of the transfer function that relates the outputs to the inputs. Hence good control requires a good model of the system - the Conant-Ashby Theorem - whether that model be an explicit technical model embedded in the technology of the system or a mental model in the head of a manager who is a part of the controlling subsystem.
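
The Conant-Ashby point can be made in a few lines (a hypothetical linear plant, invented for illustration): choosing the input that yields a desired output amounts to inverting the transfer function - that is, to having a model of the system.

```python
# Control as model-inversion: the controller that embodies an accurate model of
# the plant hits the target; the controller with a wrong model does not.

A, B = 3.0, 2.0   # the real plant's transfer function: y = A*u + B

def plant(u):
    return A * u + B

def control(target, a_model, b_model):
    """Pick the input by inverting the controller's MODEL of the plant."""
    return (target - b_model) / a_model

good = plant(control(20.0, a_model=3.0, b_model=2.0))   # accurate model
poor = plant(control(20.0, a_model=2.0, b_model=0.0))   # inaccurate model

print(good)   # 20.0 - a good model gives good control
print(poor)   # 32.0 - a poor model misses the target
```

The same logic applies whether the model is encoded in software or carried in a manager's head: the quality of control is bounded by the quality of the model.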

Some systems are not controllable or have limited controllability. Systems where the outputs can vary wildly when small changes are made to the inputs - and where it is difficult, if not impossible, to delineate the transfer function - are the basis of (uncontrollable) chaotic behaviour; such systems may be labelled "chaotic". This is not (necessarily) related to the structural complexity of the system; some very simple systems can exhibit chaotic behaviour - such as the steel pendulum over a magnet. Another class of systems with limited controllability comprises those with many emergent behaviours, large numbers and strengths of subsystem interactions (couplings) and complex, non-linear transfer functions (relating inputs to outputs). Such systems are labelled "complex systems" - and there are good reasons to think that most if not all organisations (of people and technology) are complex systems. The Cynefin framework is a taxonomy of systems that positions systems (or systems of systems) in the Chaotic or Complex categories alongside "Simple" and "Complicated" - and may be regarded as categorising the behaviours and relationships between inputs and outputs of systems. ["Simple" is where there is a simple linear (causal and nomological) relationship between inputs and outputs; "Complicated" is where there is a relationship compounded from several (or many) simple relationships.]

"Command and Control" refers to a particular autocratic and bureaucratic organisational culture and style of management. It assumes that the systems in an enterprise are simple or complicated (and therefore controllable in principle). Unfortunately, with modern information availability and communications methods, organisations can no longer be made artificially simple or complicated by restricting and controlling the flow of information; as systems they are naturally complex, and the assumption of controllability is increasingly less valid. Hence a "command and control" culture is increasingly ineffective in managing large enterprises (while retaining an appropriate level of performance). This is related to the problem of "managerial hubris" - managers falsely assume that they have sufficient knowledge of the system transfer functions in their enterprise (without doing analysis or attempting to make their models explicit) to control it. Unfortunately this is the dominant paradigm for politicians, civil servants and uneducated senior managers; it originates in Steam Age thinking (which assumed, falsely, that systems were either simple or complicated) and persists inappropriately and anachronistically in the 21st century. This is why a new paradigm founded in Systems Thinking needs to be much more widely taught and adopted.



"Sensor", "Effector" and "Regulator"

A "sensor" is a subsystem that detects and measures one or more of the outputs, or operating parameters, of the system. Sensors are often deployed on the outputs of a system as the first stage of a feedback control (sub)system, where their role is to generate the feedback information. But they may also be deployed on the inputs as the first part of a feedforward control (sub)system. An example of the latter might be a pressure sensor in the turbo of a turbocharged engine - which feeds forward information, via the Engine Control Unit (ECU), about the air input to the engine to the fuel subsystem (to ensure fuel supply is correctly adjusted for the engine power demanded). Sensors need not be technical subsystems of technical systems; a marketing department, for example, can be considered the sensor subsystem of a company or firm. The 'collective noun' for sensors is "instrumentation" - the collection of subsystems designed to measure and report (information) on the state and performance of the system, its inputs and its outputs.

An "effector" is the converse of a sensor: its job is not to detect and measure an input or an output - but to change it. Typically an effector adjusts the input or output in accordance with some (input) control signal (information). Examples of effectors include the electrically operated valves in central heating systems and the injectors in fuel-injected automotive engines (which operate in accordance with the control signals from the ECU). Again effectors need not be technical subsystems - the category can include people in a first-line management role in social systems; a project manager may be considered an "effector".

A "regulator" is a subsystem that regulates (or "controls") a system. It comprises one or more sensors (on the inputs or the outputs), one or more effectors (on the inputs or, less likely, the outputs) and a model (at least a partial model) of the system being regulated. Like effectors and sensors, regulators may be implemented with a range of technologies - or none, if "manual" regulation is implemented by people in social or sociotechnical systems, or indeed if regulation is a "natural" process in an organic system. The model in a regulator is, in a very real sense, an "information model" together with an information-processing subsystem that describes the controlled system. If traditional analogue technologies are used, then the techniques and methods of analogue computing may be used to implement the model and the regulator.

The classic example of an analogue regulator is the "Centrifugal Governor" used in a feedback control system to control the speed of a steam engine. Here the speed of the engine is 'encoded' onto a vertically oriented shaft (the sensor is a gearing mechanism) which in turn rotates a pair of weights constrained by a lever mechanism. The centrifugal force causes changes in rotational speed to be converted into changes in height - a linear encoding of the change in speed of the engine. Hence the governor mechanism is a very simple analogue computer performing one calculation on the change in speed of the engine. The linear mechanical motion is then transmitted through mechanical linkages to an inlet valve (the effector) that controls the amount of steam entering the engine's piston - closing the feedback control loop. It is so arranged that increases in rotational speed cause decreases in the amount of steam entering the piston - balancing feedback - and the net effect is one of homeostasis with respect to the speed of the engine. [This also implies that both Systems Thinking and Computing started, at least as heuristic craft disciplines, in England's 19th Century manufacturing industry; nothing to do with the universities.] Another 'classic' example of a regulator based on analogue technology is the thermostat based on the bimetallic strip - used to regulate the temperature of some system by turning on and off a "heat generator" (or "boiler" in a water-based heating system).

Modern technical regulators make use of digital technology. There is a network of sensors monitoring various parameters of the controlled system and producing a set of digital signals, and a network of effectors that can change some parameters of the controlled system at the "control inputs" - which are again digital signals. The regulator itself then uses digital computing to "calculate" the appropriate set of signals to send to the effectors, given the set of signals coming from the sensors (or "instrumentation"). In this sense the regulator is a digital signal processor - and the technology of digital signal processing, such as Peripheral Interface Controllers (PICs), may be used to embed the regulator subsystem in the wider system. As digital memory has become very inexpensive it has become possible to embed the regulator's model, in digital numerical form, into the regulator hardware; such models can be very extensive and sophisticated. Unlike models implemented in analogue computing technology, these models no longer need to be expressible in simple nomological form (let alone "linear"); they can capture sophisticated, highly contingent and complex system behaviours. When the network interconnecting sensors and effectors with the regulator uses the Internet Protocols, you potentially have an "Internet-of-Things" system. This is the era of "software-controlled" or "software-defined" systems - even if most software developers neglect to develop the appropriate instrumentation to make their software systems controllable. Perhaps the best examples of modern digital regulators are the engine management and traction control (sub)systems in modern cars, whose purpose is to guarantee the stability of the vehicle by compensating for the cognitive inadequacies and bandwidth deficit of the driver (or, equivalently, to make things easier and simpler for the driver).

As with sensors and effectors, regulators need not be technology-based (though they are usually 'technical' for some meaning of the word 'technical'). The Budgeting and Accounting (sub)systems in a company may be considered the "Spend Control Regulator" - a feedback control (sub)system of the firm. Its purpose is to control the usage, or at least the rate of usage, of the firm's financial resources. Here the model that the regulator embeds and embodies is partly in the mental models of the accounting and spend-authorising staff and partly in the firm's rules and procedures for the release of financial resources. Government "regulators" are bodies of people and technology whose purpose is to regulate (oversee and control) industries to ensure their stability and the correct, fair operation of their markets. One of the significant problems with human-based regulators (operating with vague mental models) is that of bounded rationality and "cognitive capture". For example, financial management staff can become so obsessed with controlling the money that they forget the purpose of the larger system of which the financial subsystem is a part. This has happened in several UK hospitals. Another problem is "weak regulation" - where the regulator has insufficient power and/or an inaccurate model to properly regulate (control) the industry; it can be argued that this was (and still is) the case with the banking industry (both retail and investment) and was the ultimate cause of the 2008 crash.

Loose-, Tight- and "Loose-Tight" Coupling


Resonance

Resonance is a phenomenon, concept and term from Physics, where it refers to the transfer of energy between discrete systems that have some degree of "coupling" between them. The classic "problem" used for instruction purposes in Physics is that of two pendulums coupled by a soft spring (which may be nothing more than a common wire from which both pendulums hang). A more realistic example of resonance is radio wave transmission or reception in an antenna (whose very structure creates a coupling between electromagnetic waves and alternating currents and voltages).
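
The coupled-pendulum picture can be sketched numerically (a toy model with made-up parameters, integrated with simple semi-implicit Euler steps): energy placed in the first oscillator is gradually transferred across the weak coupling to the second.

```python
import math

def simulate(coupling=0.05, omega=1.0, dt=0.001, t_end=65.0):
    """Two identical oscillators joined by a weak coupling; returns the largest
    displacement reached by the second oscillator, which starts at rest."""
    x1, v1 = 1.0, 0.0   # first pendulum displaced and released
    x2, v2 = 0.0, 0.0   # second pendulum initially at rest
    max_x2 = 0.0
    for _ in range(int(t_end / dt)):
        a1 = -omega**2 * x1 - coupling * (x1 - x2)
        a2 = -omega**2 * x2 - coupling * (x2 - x1)
        v1 += a1 * dt
        v2 += a2 * dt
        x1 += v1 * dt
        x2 += v2 * dt
        max_x2 = max(max_x2, abs(x2))
    return max_x2

print(simulate())  # close to 1.0: nearly all the energy crosses the coupling
```

Because the two oscillators share the same natural frequency, even a very weak coupling eventually transfers (almost) all of the energy - the essence of resonance.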

The more general concept of "resonance" in Systems Thinking refers not to the transfer of energy driving the behaviours of the two systems, but to the stimulation of behaviours in one system by behaviours in the other system through the transfer of anything - energy, entropy, information, ideas, materials,... - through some sort of coupling between the two systems. When resonance occurs, however, it may involve or provoke large magnitude transfers of energy, entropy, information etc. between the two systems - intended or not.

In recent years the term "resonance" has entered popular discourse with reference to ideas; people say things like "that resonates with me" when they mean some idea chimes with their experience or knowledge. In this sense the two systems involved are people, and the idea described stimulates ideas in the receiving person and the thing transferred between the two systems is information (a description of an idea). This usage is aligned to Systems Thinking and may be appropriate for social systems. Used in the context of organisational cybernetics it could be called "organisational resonance".

"Equilibrium" and "Non-Equilibrium"

Dynamic Equilibrium and "Steady-State"


Stocks and Flows

Queueing Systems

Cycle Time

Relaxation Time

Linear and Non-Linear

Superposition and Composability

Structural Coupling


Homeostasis

"Homeostasis" is a phenomenon, concept and term that comes from Biology, where it was observed that living systems operate (behave) so as to maintain and sustain the operation of the system (to sustain its own life). In living systems this involves regulating some 'parameter' of the system to within (viable, acceptable) 'norms' characterising the system in 'normal' operation; examples include body temperature, blood pressure, blood-sugar levels, levels of energy and nutrition, etc. In fact, life itself can be regarded as an entropy-maintaining / sustaining operation where low-entropy matter is ingested as food and high-entropy material is excreted as waste. Systems Thinking generalises the concept of "homeostasis" to non-living as well as living systems, and its meaning also slightly generalises to mean the maintenance / sustainment of any parameter or behaviour of the system (or its outputs) within a narrow range around some nominal norm value. The 'classic' example of a technical homeostatic system is that of a thermostat-controlled heating system used to maintain the internal temperature of a building in a narrow range where people feel comfortable. An example of a homeostatic social system might be the medical education system in the UK - which is supposed to maintain the number of General Practitioners per head (of population) within certain acceptable limits. Unfortunately this system is failing, since too few medical students are at present entering General Practice to compensate for older doctors retiring; homeostasis in GP density is not being maintained by the medical education system. It cannot be sustained without the 'import' of doctors from outside the UK - and so the system fails to be truly homeostatic.
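
The thermostat-controlled heating system can be sketched as a simple bang-bang regulator with hysteresis (all numbers below are invented for illustration): the regulated parameter is held inside a narrow comfort band around the norm.

```python
# A toy thermostat: the heater switches on below the band, off above it, and
# the building temperature is held homeostatically around the 20 degree norm.

SET, BAND = 20.0, 0.5   # target temperature and half-width of the comfort band

def step(temp, heating, outside=5.0, dt=0.1):
    """One time step: heat loss to the outside, plus heater input when on."""
    loss = 0.1 * (temp - outside) * dt
    gain = 3.0 * dt if heating else 0.0
    return temp - loss + gain

temp, heating = 15.0, False
history = []
for _ in range(2000):
    if temp < SET - BAND:
        heating = True    # too cold: boiler on
    elif temp > SET + BAND:
        heating = False   # too warm: boiler off
    temp = step(temp, heating)
    history.append(temp)

settled = history[500:]   # ignore the initial warm-up transient
print(round(min(settled), 2), round(max(settled), 2))  # stays near the comfort band
```

Once the warm-up transient has passed, the temperature cycles inside a narrow range around the norm - the behavioural signature of homeostasis.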


System Collections

Composition (and Decomposition), Recursion, Stratification and Fractality

Systems or System Collections Taxonomy

Simple or Simplicity

Complicated or Complication

Complex or Complexity

Chaos or Chaotic

Churchman's Conditions for Purposeful Systems

Ulrich's Sources of Influence


(Reductive) Decomposition

(Holistic) Composition


"Entropy" is a concept and a term developed in 19th Century Physics - it was introduced into Thermodynamics as an abstract, theoretical hypothesized physical quantity to help explain the behaviours and efficiencies of steam engines and related mechanisms. The 'real' explanation and conception (ie understanding) of Entropy was, however, only produced later with the development of the theory of 'Statistical Mechanics' - ie the explanation of the behaviours of gases based on the statics of the motions of their constituent atoms (or molecules). In popular science literature entropy is often 'explained' as some sort of "degree of disorder". But this explanation is somewhat mysterious, misleading and unhelpful - it has epistemic connotations (disorder is in the eye of the beholder) and is difficult to apply in a precise but non-technical way to help understand some system. There are much better explanations / conceptions from a Systems Thinking perspective; they relate more to the Statistical Mechanics point of view and they start with the observation that entropy is a perfectly well-defined, non-mysterious physical quantity and property of an assembly or aggregation of "micro-systems". There are other, more familiar physical quantities, such as Pressure and Temperature related to some statistic of the assembly - e.g. Temperature is related to the mean kinetic energy of the microsystems (atoms or molecules) - and entropy is just another; nothing mysterious about it except that it is not one we can sense easily (as easily as temperature or pressure).

In Thermodynamics and Statistical Mechanics, the formula for calculating the entropy of a collection of gaseous molecules (or atoms) is <math>S = k \log \Omega</math>, where k is Boltzmann's constant and <math>\Omega</math> is a measure of the "relative state-space spread density". That last phrase needs a little "unpacking" - which is done below. First, however, we must observe that such a simple formula is possible in Physics because the microsystems (atoms/molecules) are themselves extrinsically (from the outside) simple (or, at least, they are modelled as simple particles in Thermodynamics and Statistical Mechanics) and the statistics governing their state occupation are well-known (the Boltzmann statistics).

In Systems Thinking things are not so simple and straightforward and calculating the entropy of a System-of-Systems is not so easy - which is why it is rarely done. Nevertheless the "conceptual recipe" for calculating the entropy of any System-of-Systems (SoS) remains the same (as in Physics):

  • The SoS is analytically decomposed through as many levels as required to arrive at the atomic (in the sense of smallest, indivisible unit - not necessarily real atoms) microsystems.
  • The possible (potential) discrete states of the microsystems are identified - that are consistent with the overall state of the SoS.
  • The way that the microsystems actually occupy the potential states - how they occupy the state-space; what the statistics are - for a given state of the SoS is determined. [And this may not be expressible in an algebraic mathematical model - unlike gases and steam engines.]
  • You can then determine the "relative state-space density" of the microsystems for a SoS in any particular state - ie the ratio of states occupied to equivalent potential states (combinations of the states of the microsystems).
  • Taking the (natural) logarithm of this quantity and multiplying by Boltzmann's constant gives you the quantified entropy of the SoS or spread of microsystems (for each potential overall state of the SoS). For SoSs comprising non-simple microsystems this is quite unlikely to be expressible as a simple formula; the Physics of gases is a very, very special case.
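The recipe above can be illustrated for the simplest possible case - a toy assembly of N identical two-state microsystems, where the macro-state of the SoS is "m microsystems in the excited state". This sketch (the two-state model is an assumption chosen for tractability, not part of the recipe itself) counts the micro-states consistent with each macro-state and applies the Boltzmann formula S = k log Ω, with the natural logarithm:

```python
import math

# Illustrative sketch of the "conceptual recipe" for a toy assembly: N
# identical two-state microsystems, where the macro-state of the SoS is
# "m microsystems in the excited state". Omega is the number of micro-states
# consistent with that macro-state, and S = k ln(Omega) gives the entropy.

K_BOLTZMANN = 1.380649e-23  # Boltzmann's constant, J/K

def entropy(n_microsystems, m_excited):
    # Count the micro-states consistent with the macro-state: the number of
    # ways of choosing which m of the n microsystems are excited.
    omega = math.comb(n_microsystems, m_excited)
    return K_BOLTZMANN * math.log(omega)

# Entropy is lowest for the most constrained macro-states (all ground state,
# Omega = 1, S = 0) and highest for the evenly spread macro-state (m = n/2).
print(entropy(100, 0), entropy(100, 10), entropy(100, 50))
```

Even in this toy case the counting is combinatorial rather than a closed formula over the macro-state; for non-identical, non-simple microsystems the counting step is exactly what becomes hard.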

This conceptual recipe inherits the fundamental limitations of statistical thinking; it works best when there are large numbers (hundreds, thousands, or higher powers of ten) of identical microsystems. At the other end of the scale (numbers roughly less than 10, and a high level of qualitative dissimilarity between the subsystems) the SoS decomposition can be handled with discrete mathematics and modelling techniques. In the intermediate range it is more difficult - but computer-aided modelling and simulation can be used to handle such situations to a certain extent. This applies to the modelling of enterprises - and this is the basis of Real Enterprise Architecture - but it is made even more difficult because enterprises are complex and adaptive and incorporate intelligent components - meaning they can change their structural composition according to the states they are in; so there is nothing remotely as simple as the Boltzmann statistics that can be applied to ICASOSs or "Enterprises" - they are just not like that very, very special case. In this sense Enterprise Architecture tackles a harder problem than the theoretical physics of the 19th and 20th centuries - but is able to do so because of modern computer (modelling and simulation) technology.

Citing the work of Ilya Prigogine, Ludwig Von Bertalanffy discusses some of the issues of entropy and thermodynamics as applied to "Open Systems" in his 1968 book entitled "General System Theory". This demonstrates that the concept of Entropy has been included in the general field of Systems Thinking for approximately 50 years now (in 2017). It also exemplifies the central thesis of General System Theory - that there are general Systems Principles (or Theories) that transcend their application in any special case of systems (like the systems of Physics). Von Bertalanffy discusses "equilibrium" as the system (or SoS) state of minimum entropy production and suggests that the Second Law of Thermodynamics may not apply to Open Systems that are not in a state of equilibrium. STREAMS considers that Von Bertalanffy is simply wrong about this; the concept of entropy is still reasonably well-defined even for macro-level systems (or SoSs) that are far from an equilibrium state; it is still fundamentally the ratio of occupied states to potential states for the constituent microsystems. The question is similar to that of asking the temperature of an electric heating coil shortly after the current is turned on; the temperature (or entropy) is still well-defined and in-principle measurable even while it is shifting (changing) rapidly and it is hard to calculate or measure the instantaneous value until an equilibrium state is approached and the rate of change slows. Hence entropy is a physical quantity even for open systems - and the Second Law applies (just as all the Laws of Physics apply at the micro level - even if they may not be so useful at the macro level). Despite the confusion of Biologists (and others) the Second Law - and the concept of Entropy - applies to all systems including Open Systems and Living Systems and even Social Systems. 
[In fact, it is instructive to consider Living Systems as thermodynamic engines that are more about the homeostatic maintenance of local entropy levels than the 'consumption' of energy - since it is not clear what it means to "consume energy" (which is conserved).]

Finally, Von Bertalanffy makes the link between Thermodynamics and Information Theory by identifying Entropy as "negative information" - or, equivalently, information as "negative entropy" or negentropy - and hints at the later 20th century ideas of the genetic code as the basis for evolution (or adaptation), encoding information about the environment (or context), and "epigenetics" - organisms reflexively modifying their environment. All these ideas lead naturally into an evolutionary perspective on complex adaptive systems in general that may be applied beyond the fields of Physics and Biology, and in particular in the social (and managerial) systems of Government and private sector organisations. [As Laszlo and Laszlo do in their conceptualisation of "Evolutionary Management".]

"Entropy" is, therefore, a fundamental concept in the theoretical framework of Systems Thinking - even if one that is not so useful as a diagnostic tool for application to Enterprises.

Analysis and Synthesis

Top-Down, Bottom-Up, Middle-Out, Outside-In and Inside-Out

One-Up, One-Down or System, Wider System, Context

Causality or Causes

"Law" or "Nomological Relation"

The "Reference Model"

System Lifecycle

Engineered and Natural Systems

Social and Technical Systems

Socio-Technical Systems

Hard, Soft and Coercive Systems

Systems Principles

The Darkness Principle

No system can be known completely.

The Black Box Principle

It is not necessary to know a system completely in order to determine the function of the system. This is obviously related to the POSIWID Principle - the Purpose Of a System Is What It Does - and amounts to an imperative to take a "Functional Stance" based on observation (empirical evidence) when conducting Systems Analysis.

The Laws and Theorems of Systems Thinking

Ashby's Law or The Law of Requisite Variety

There are various formulations of Ashby's Law, but the central idea is that any (good) model of a system (bearing in mind that a "model" may be an analogous physical system) must have approximately the same number of states as, or more states than, the system modelled. The number of states may be called the "Variety" over which the system ranges. Wikipedia discusses Variety and the Law of Requisite Variety.

There are a number of consequences, implications and lemmas that follow on from Ashby's Law. The first of these is that if the model is used as the basis of a regulator (or control subsystem) for the system then the possibility of control is limited by the 1) variety of the regulator and 2) the information bandwidth (or channel capacity) between controlled (or regulated) system and the regulator. This immediately leads to the Conant-Ashby Theorem below. A system that is much simpler (in the sense of having fewer possible states) cannot control a more complicated or complex (in the sense of having many more possible states) system.
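This limitation can be shown with a toy illustration (the numeric disturbance and response sets below are invented purely for illustration): the regulator tries to hold an "essential variable" d - r at zero; with matching variety every disturbance can be cancelled, while a low-variety regulator lets most disturbances leak through as uncontrolled outcomes.

```python
# Illustrative sketch of Ashby's Law: a regulator can only hold the outcome
# steady if it has at least as many responses (variety) as there are
# disturbances to counter. The "essential variable" here is d - r, which the
# regulator tries to pin at zero. All values are invented for illustration.

def uncontrolled_outcomes(disturbances, responses):
    # For each disturbance, pick the best available response; collect the
    # distinct outcomes that survive regulation.
    outcomes = set()
    for d in disturbances:
        best = min(responses, key=lambda r: abs(d - r))
        outcomes.add(d - best)
    return outcomes

disturbances = range(10)      # variety of 10 in the environment
full_regulator = range(10)    # matching variety: every disturbance cancelled
poor_regulator = [0, 5]       # variety of 2: most disturbances leak through

print(uncontrolled_outcomes(disturbances, full_regulator))
print(uncontrolled_outcomes(disturbances, poor_regulator))
```

With matching variety the only surviving outcome is 0; with variety 2 against variety 10, most of the environmental variety passes through unregulated - "only variety can absorb variety".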

The Conant-Ashby Theorem

Every good regulator of a system must be (or contain) a model of the system.

The Feedback Dominance Theorem

The feedback dominance theorem says that in systems exhibiting high gain - where the change in outputs is very much larger than the corresponding and causative change in the inputs - the behaviour of the system will come to be dominated by the feedback. This is obvious in the case where the feedback is positive and "in-phase", where it becomes reinforcing and drives the system towards its limits. This leads to the well-known howl of electrically amplified acoustic feedback. It is not so obvious where the feedback has a long lag and is not in-phase with the 'normal' inputs. Feedback dominance may lead to "avalanche" or "snowball" behaviour and drive systems into crisis - which provokes a radical change in the behaviour of the system - possibly through the intervention of forces not usually or normally included in the system conceptualisation.
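The reinforcing, in-phase case can be sketched with a toy high-gain amplifier (the gain, input level and saturation limit are invented for illustration): a tiny external input is fed back and re-amplified until the output sits at the system's limits - the "howl".

```python
# Minimal sketch of in-phase positive feedback dominating a high-gain system:
# the output is fed back and re-amplified, so even a tiny initial input drives
# the system to its saturation limits (the "howl" of acoustic feedback).
# Gain, input and limit values are invented for illustration.

def step(x, feedback, gain=2.0, limit=1.0):
    # Amplify the input plus the fed-back output, clipping at the limits.
    y = gain * (x + feedback)
    return max(-limit, min(limit, y))

output = 0.0
for _ in range(20):
    output = step(x=0.001, feedback=output)  # tiny external input, full feedback

print(output)
```

After a handful of iterations the output saturates at the limit: its value is determined by the feedback path, not by the (tiny) external input - the behaviour of the system is dominated by the feedback.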

The Redundancy Of Potential Command Principle

Warren McCulloch proposed this principle as necessary for resilient purposeful systems. It refers to the property of some systems where more than one component of the system has the potential to take on the role of 'Controller' or 'Regulator' for the system (for whatever reason) and achieve the purposes of the system - including maintaining the homeostatic state of the system.

The Relaxation Time Principle

The relaxation time principle says that a system cannot return to its equilibrium state if the interval between disturbances (ie the time-domain converse of the frequency of disturbances) from equilibrium is shorter than the relaxation time of the system. This is obvious given the definitions above - but it is not as simple as it first appears. If the inputs to the system (and their variation) are continuous, this makes the connection to traditional signal processing and classical control theory, with its Nyquist Theorem and Bode Plots. If the inputs are discrete and arrive at intervals, then statistical methods must be used, and this makes the connection to Queueing Theory and Operational Research. A batch system is a system in which discrete inputs (and outputs) occur at regular periodic intervals. In this scenario the relaxation time and the relaxation time principle take on a statistical character, and one is talking about average times and standard deviations (in accordance with the statistics pertaining to the system). Any such system which spends a lot of time in a default, quiescent state is potentially inefficient, in that it has resources and processes not engaged in producing outputs for extended periods.
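The principle can be illustrated with a toy first-order system (the time constant, disturbance size and intervals below are invented for illustration) that relaxes exponentially towards equilibrium between periodic disturbances:

```python
import math

# Sketch of the relaxation time principle: a first-order system relaxes back
# towards equilibrium with time constant tau. Unit disturbances arrive every
# `interval` time units; if the interval is much shorter than tau, the
# deviation never dies away between kicks and the system drifts further and
# further from equilibrium. All parameter values are illustrative.

def residual_deviation(tau, interval, kicks=50):
    deviation = 0.0
    for _ in range(kicks):
        deviation += 1.0                          # disturbance from equilibrium
        deviation *= math.exp(-interval / tau)    # exponential relaxation
    return deviation

# Disturbances spaced well beyond the relaxation time: near-full recovery.
print(residual_deviation(tau=1.0, interval=5.0))
# Disturbances arriving faster than the system can relax: deviation builds up.
print(residual_deviation(tau=1.0, interval=0.1))
```

When the interval is several multiples of tau the residual deviation is negligible; when it is a fraction of tau the deviations accumulate and the system never returns to equilibrium, just as the principle states.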

The Four-Phase Model (of System Behaviour)

Many systems - across a range of domains - exhibit a common pattern of 'behavioural evolution' as their inputs change. This pattern is characterised by four 'phases' or 'modes' of system behaviour: 1) Linear 2) Non-Linear 3) Cascade 4) Catastrophic. In the first phase the (changes in the) outputs of the system are linearly related to the (changes in the) inputs. Many systems approximate to such linear systems when the magnitude or frequency of the input changes is low. In the non-linear phase the linear relationship between inputs and outputs breaks down - and this can lead to control problems and inappropriate responses. In the cascade phase the system response itself starts to trigger changes in inputs and outputs and an avalanche of change can occur. The catastrophic phase is marked by radical changes in the system behaviour that may amount to a change of Transfer Function (a radical change in the relationship between inputs and outputs), a significant shift away from 'normal' behaviour or structural change in the nature of the system. This may be irrecoverable - an example might be an overloaded voltage regulator which overheats, melts and becomes a fuse or resistor instead of a voltage regulator. Perhaps the best example of cascade behaviour is the phenomenon of cascade failures in electrical power supply grids. There are many examples of linear and non-linear systems behaviours. The 'size' of each of the phases - in terms of the range of input variation, states and system behaviours - is highly dependent on the nature and construction of the system - and it is entirely possible that the non-linear and cascade phases are so small as to be practically non-existent.
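The four-phase pattern can be sketched as a stylised input-response curve (the phase thresholds and functional forms here are invented purely for illustration - real systems will differ):

```python
# Stylised sketch of the Four-Phase Model as an input-response curve.
# The thresholds and functional forms are invented for illustration only.

def response(x):
    if x < 1.0:
        return x                      # 1) linear: output tracks input
    elif x < 2.0:
        return 1.0 + (x - 1.0) ** 2   # 2) non-linear: proportionality breaks down
    elif x < 3.0:
        return 2.0 * 2 ** (x - 2.0)   # 3) cascade: self-reinforcing runaway
    else:
        return 0.0                    # 4) catastrophic: structural change -
                                      #    the "regulator melts into a fuse"

for x in [0.5, 1.5, 2.5, 3.5]:
    print(x, response(x))
```

The discontinuity at the final threshold is deliberate: the catastrophic phase is not a larger response but a change in the transfer function itself - the system that existed before no longer does.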

Real Enterprise Architecture Principles

Management Science principles

The Principles of Evolutionary Management

The Laszlos in their book "The Insight Edge - An Introduction To The Theory and Practice of Evolutionary Management" identify "Evolutionary Management" as a new paradigm (superseding what they call the neoclassical paradigm) of management thinking based on systems concepts and the view of the enterprise (or organisation) as an intelligent complex adaptive system of systems (ICASOS).

   The paradigm notion for understanding the functioning of the enterprise is 
   the open system. These are the systems that have a throughput of energy, 
   matter, and/or information. The concept applies to the enterprise. This is 
   a system that takes in information, energy, raw materials, and transforms 
   them into commercially valuable products and services.

Laszlo and Laszlo, 1997

They define 18 "Principles of Evolutionary Management" many of which draw on the concepts of Systems Thinking - or suggest the application of Systems Thinking in a particular manner for the social systems of enterprises.

