06- Operations Management

Aviram (Avi) Vijh
Feb 28, 2021

This article summarises handy concepts and useful models related to operations management as part of my MS program at the University of York.

What is operations management?

The management of processes that create value for a company and its customers. These processes transform inputs, such as materials, energy, services, and people, into outputs, such as goods and services.

One way to understand what is, or is not, an operational function within an organisation is to think in terms of input–transformation–output. All operations create and deliver products or services by changing inputs into outputs through a process of transformation. This could be by processing raw materials into finished products, or by transforming a client brief into a finished advertising campaign. All operations conform to this general model; however, what counts as the inputs, the transformation, and the outputs varies significantly between organisations, and between business processes within organisations.

Process Hierarchy

The order in which the different processes and events must take place is known as the process hierarchy. The role of the project manager is critical in helping ascertain and follow the process hierarchy.

Operations Strategy

The coming together of strategic management and operations.

Four perspectives on operations strategy

Diamond Model

The ‘diamond’ model, based on the work of Aaron Shenhar and Dov Dvir, distinguishes between projects according to their relative novelty, technology, complexity and pace.

Diamond Model of classifying projects

Triple bottom line

A way to judge performance across three levels: operational, strategic, and societal. One common term that tries to capture the idea of a broader approach to assessing an organisation’s performance is the ‘triple bottom line’ (TBL, or 3BL), also known as ‘people, planet and profit’.

Top down and bottom up strategies

Top-down and bottom-up strategies can work in ways that reinforce each other.

Product/service life cycle and operations performance objectives

The effects of the product/service life cycle on operations performance objectives

Managerial framework for digital innovation strategy

Innovation, Design & Creativity

The relationship between creativity, innovation, and design

Innovation S-curve

When new ideas are introduced in services, products or processes, they rarely have an impact that increases uniformly over time. Usually, performance follows an S-shaped progression. In the early stages of introducing a new idea, often large amounts of resources, time and effort are needed, yet only relatively small performance improvements are experienced. With time, as experience and knowledge about the new idea grow, performance increases. But as the idea becomes established, extending its performance further becomes increasingly difficult. When one idea reaches its mature, ‘levelling-off’ period, it is vulnerable to a further new idea being introduced, which, in turn, moves through its own S-shaped progression. This is how innovation works: reaching the limits of one idea prompts a newer, better idea, with each new S-curve requiring some degree of redesign.

S-curve of innovation
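The S-shape is commonly modelled with a logistic function. Here is a minimal Python sketch (the logistic form and all parameter values are assumptions for illustration, not from the source): early effort yields little improvement, the middle of the curve is steep, and gains level off as the idea matures.

```python
# S-curve sketch using a logistic function (an illustrative assumption --
# the text only says "S-shaped"). Early effort yields little improvement,
# the middle is steep, and gains level off at maturity.
import math

def performance(effort, ceiling=100.0, steepness=1.0, midpoint=5.0):
    """Logistic S-curve: cumulative effort in, performance out."""
    return ceiling / (1.0 + math.exp(-steepness * (effort - midpoint)))

for effort in range(0, 11, 2):
    print(f"effort {effort:>2}: performance {performance(effort):5.1f}")
```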

Design funnel

The design funnel — progressively reducing the number of possibilities until the final design is reached

Outsourcing vs. Offshoring

Two supply network strategies that are often confused are outsourcing and offshoring. Outsourcing means deciding to buy in products or services rather than perform the activities in-house. Offshoring means obtaining products and services from operations based outside one’s own country. Of course, one may both outsource and offshore, as illustrated in the figure below. The two are closely related and the motives for each may be similar: offshoring to a lower-cost region of the world is usually done to reduce an operation’s overall costs, as is outsourcing to a supplier that has greater expertise or scale, or both.

The decision logic of outsourcing
Offshoring and outsourcing are related but different

Process design & Product/Service design

Throughput time, cycle time and work-in-progress

Throughput time is the elapsed time between an item entering the process and leaving it; cycle time is the average time between items emerging from the process; and work-in-progress is the number of items within the process at any point in time. (Note that in agile software contexts, ‘cycle time’ is often used differently, to mean the time a team spends actually working on an item until it is ready to ship, i.e. the time it takes to complete one task.)

Little’s law states that Throughput time = Work-in-progress * Cycle time.
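A quick sanity check of the law in Python, with invented numbers (the only requirement is that work-in-progress and cycle time are measured in consistent units):

```python
# Little's law: throughput time = work-in-progress x cycle time.
# Illustrative numbers only: a process holding 20 items, with one item
# emerging on average every 3 minutes.

work_in_progress = 20   # items in the process at any point in time
cycle_time = 3.0        # average minutes between items emerging

throughput_time = work_in_progress * cycle_time
print(f"Throughput time: {throughput_time:.0f} minutes")  # -> 60 minutes
```

Knowing any two of the three quantities fixes the third, which is what makes the law useful for rough capacity checks.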

Throughput efficiency

This idea that the throughput time of a process is different from the work content of whatever it is processing has important implications. What it means is that for significant amounts of time no useful work is being done to the materials, information or customers that are progressing through the process.
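Throughput efficiency makes this gap concrete: it is usually defined as work content (the time an item is actually being worked on) as a percentage of throughput time. A minimal sketch with invented numbers:

```python
# Throughput efficiency = work content / throughput time * 100%.
# Invented example: an insurance claim takes 40 elapsed days to move
# through the process, but is only actively worked on for 2 hours.

work_content_hours = 2.0
throughput_time_hours = 40 * 24.0   # 40 elapsed days

efficiency = work_content_hours / throughput_time_hours * 100
print(f"Throughput efficiency: {efficiency:.2f}%")  # -> about 0.21%
```

Very low figures like this are not unusual: most of the elapsed time is typically spent waiting rather than being worked on.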

The Allen curve

Arranging the facilities in any workplace directly influences how physically close individuals are to each other, and this, in turn, influences the likelihood of communication between them. So, what effect does placing individuals close together or far apart have on how they interact? The work of Thomas J. Allen at the Massachusetts Institute of Technology first established how communication drops off with distance. His 1984 book, Managing the Flow of Technology, presented what has become known as the ‘Allen curve’: a powerful negative correlation between the physical distance between colleagues and their frequency of communication. The Allen curve estimated that we are four times as likely to communicate regularly with a colleague sitting 2 metres away as with someone 20 metres away, and that 50 metres (for example, separate floors) marks a cut-off point for the regular exchange of certain types of technical information.

As some experts have pointed out, the office is no longer just a physical place; email, remote conferencing and collaboration tools mean that colleagues can communicate without ever seeing each other. However, distance still appears to matter. One study showed that so-called distance-shrinking technology actually makes close proximity more important, with both face-to-face and digital communications following the Allen curve: engineers who shared a physical office were 20 per cent more likely to stay in touch digitally than those who worked elsewhere, and when they needed to collaborate closely, co-located colleagues emailed each other four times as frequently as colleagues in different locations.

Resource and process ‘distance’

The degree of difficulty in implementing process technology depends on the novelty of the new technology resources and the changes required in the operation’s processes. The less the new technology resources are understood (influenced, perhaps, by the degree of innovation), the greater their ‘distance’ from the operation’s current technology resource base. Similarly, the greater the extent to which an implementation requires an operation to modify its existing processes, the greater the ‘process distance’. The greater the resource and process distance, the more difficult any implementation is likely to be, because such distance makes it hard to adopt a systematic approach to analysing change and learning from mistakes.

Learning potential depends on both technological resource and process ‘distance’

Job enlargement vs. enrichment

Drum, buffer, rope

The drum, buffer, rope concept comes from the theory of constraints (TOC) and a concept called optimized production technology (OPT), originally described by Eli Goldratt in his novel The Goal. It is an idea that helps to decide exactly where in a process control should occur.

Goldratt argued that the bottleneck in the process should be the control point of the whole process. It is called the drum because it sets the ‘beat’ for the rest of the process to follow.

Therefore, it is sensible to keep a buffer of inventory in front of it to make sure that it always has something to work on. Because it constrains the output of the whole process, any time lost at the bottleneck will affect the output from the whole process. So it is not worthwhile for the parts of the process before the bottleneck to work to their full capacity. All they would do is produce work which would accumulate further along in the process up to the point where the bottleneck is constraining the flow. Therefore, some form of communication between the bottleneck and the input to the process is needed to make sure that activities before the bottleneck do not overproduce. This is called the rope.
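A minimal discrete-time sketch of the idea in Python (the stage rates and buffer size are all invented for illustration): the bottleneck’s rate is the drum, a fixed stock in front of it is the buffer, and the release rule is the rope.

```python
# Drum-buffer-rope sketch (illustrative only). Stage B is the bottleneck
# (the "drum"). Work is released into the line (the "rope") only to
# replenish a fixed buffer in front of the bottleneck.

BOTTLENECK_RATE = 4   # units per period the bottleneck can process (drum)
UPSTREAM_RATE = 10    # the upstream stage could do far more if allowed
BUFFER_TARGET = 8     # units we try to keep waiting in front of it (buffer)

buffer_before_bottleneck = BUFFER_TARGET
finished = 0

for period in range(10):
    # Rope: release only enough to restore the buffer, capped by what
    # the upstream stage can actually process this period.
    release = max(0, min(BUFFER_TARGET - buffer_before_bottleneck
                         + BOTTLENECK_RATE, UPSTREAM_RATE))
    buffer_before_bottleneck += release

    # Drum: the bottleneck processes at its own (constraining) pace.
    done = min(BOTTLENECK_RATE, buffer_before_bottleneck)
    buffer_before_bottleneck -= done
    finished += done

    print(f"period {period}: released {release}, "
          f"buffer {buffer_before_bottleneck}, finished {finished}")
```

Note that the upstream stage, although capable of 10 units per period, is only ever asked for 4: letting it run at full capacity would simply pile up work-in-progress in front of the bottleneck without increasing output.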

Base operating capacity

The base capacity of an organisation is the level around which capacity is altered slightly up or down in response to demand or forecasts, so getting it right matters. Ideally, it should be more or less correct for the majority of the time, where ‘correct’ means sufficient for the normal demand on that operation. So the two important questions are: how do you determine what the base capacity should be, and how do you make sure capacity can be altered around that level?

There are three factors that are thought to influence what the base capacity should be:

  • The relative significance to the organisation of the operation’s successful performance (is this really important to our organisation?).
  • The perishability of the operation’s outputs (does it matter if the output is stored periodically?).
  • The volatility in demand or supply.
The base level of capacity should reflect the relative importance of the operation’s performance objectives.

Capacity constraints

Many organisations operate below their maximum processing capacity, either because there is insufficient demand to completely ‘fill’ their capacity, or as a deliberate policy, so that the operation can respond quickly to every new order. Often, though, organisations find themselves with some parts of their operation operating below capacity while other parts are at their capacity ‘ceiling’. It is the parts of the operation that are operating at their capacity ‘ceiling’ which are the capacity constraint for the whole operation.

Development of ERP

The development of ERP

Why quality matters

Higher quality has a beneficial effect on both revenues and costs
The customer’s domain and the operations domain in determining the perceived quality

Complexity theory

Early systems theorists and early proponents of cybernetics believed that if you knew all the parts of a system (all the different parts of an operation: machines, materials, people, etc.), and you knew how all the parts interact (the relationships between those parts, e.g., person X operates machine Y that processes material Z), then you could predict how that system would behave. Unfortunately, for all but the simplest systems, this does not always seem to be the case. Systems exhibit emergent behaviours: behaviours that you cannot predict from knowledge of all the parts and their interactions. It is this problem, these unintended behaviours or outcomes, that complexity theory seeks to address.

A very simple example is the bullwhip effect, which can produce volatile behaviour in supply chains. There is a supply chain game based on the distribution of beer that demonstrates this effect (the effect was first observed by P&G in a nappy (diaper) supply chain). Many of the problems in the P&G supply chain were due to the system (the supply chain) overreacting to small perturbations in demand forecasts, creating volatility. Complexity theorists build tools (models and simulations) that try to help us understand what sort of emergent properties a system might have, so they can be accounted for.
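A toy Python simulation of the effect (the echelon structure, the stock-target rule and all numbers are invented for illustration): each echelon naively tries to hold enough stock to cover three periods of whatever demand it has just observed. A single one-week blip in customer demand is amplified into huge order swings upstream, followed by weeks of zero orders.

```python
# Bullwhip-effect sketch (illustrative only). Four echelons, each using a
# deliberately naive rule: hold enough stock to cover three periods of
# whatever demand it just observed.

echelons = ["retailer", "wholesaler", "distributor", "factory"]
stock = {e: 12 for e in echelons}   # start in steady state (demand = 4)

customer_demand = [4, 4, 4, 6, 4, 4, 4, 4]   # one small blip in week 3

for week, demand in enumerate(customer_demand):
    orders = []
    for e in echelons:
        stock[e] -= demand                 # ship (negative = backorders)
        target = 3 * demand                # cover 3 periods of observed demand
        order = max(0, target - stock[e])  # naive order-up-to rule
        stock[e] += order                  # assume immediate replenishment
        orders.append(order)
        demand = order                     # this order is upstream demand
    print(f"week {week}: customer demand {customer_demand[week]:>2}, "
          f"orders upstream {orders}")
```

None of the local rules looks unreasonable on its own; the volatility is an emergent property of the chain as a whole, which is exactly the kind of behaviour complexity theorists try to model.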

Deciding on your sourcing strategy

Using a bicycle manufacturer as an example:

Lean Operations — Discovering problems

Reducing the level of inventory (water) allows operations management (the ship) to see the problems in the operation (the rocks) and work to reduce them.

Failure management

How failure is managed depends on its likelihood of occurrence and its negative consequences

Sources of failure in OM

The sources of potential failure in operations

Failure Mode & Effects

Having identified potential sources of failure (either in advance of an event or through post-failure analysis) and having then examined the likelihood of these failures occurring through some combination of objective and subjective analysis, managers can move to assigning relative priorities to risk. The most well-known approach for doing this is failure mode and effect analysis (FMEA). Its objective is to identify the factors that are critical to various types of failure as a means of identifying failures before they happen. It does this by providing a ‘checklist’ procedure built around three key questions for each possible cause of failure:

  • What is the likelihood that failure will occur?
  • What would the consequence of the failure be?
  • How likely is such a failure to be detected before it affects the customer?
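A common way to operationalise these three questions is to score each on a 1–10 scale and combine them into a risk priority number, RPN = occurrence × severity × detection, then address failures in descending RPN order. A minimal sketch with invented failure modes and scores:

```python
# FMEA risk priority numbers (RPN) -- illustrative failure modes and
# scores only. Each of the three FMEA questions is scored 1-10:
#   occurrence: how likely is the failure to occur?
#   severity:   how serious would its consequences be?
#   detection:  how likely is it to escape detection before reaching
#               the customer? (higher = harder to detect)

failure_modes = [
    # (description,             occurrence, severity, detection)
    ("wrong item picked",        4,          6,        3),
    ("late courier collection",  7,          4,        2),
    ("damaged in transit",       3,          8,        6),
]

ranked = sorted(failure_modes, key=lambda f: f[1] * f[2] * f[3],
                reverse=True)
for desc, occ, sev, det in ranked:
    print(f"{desc:<24} RPN = {occ * sev * det}")
```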

Recommended text:

Operations Management (8th edition) — Nigel Slack, Alistair Brandon-Jones, Robert Johnston
