
Asset Performance Management through Life

Abstract

The terms Reliability (R) and Maintainability (M) are now in vogue. Does improvement in Reliability and Maintainability (R&M) relate to Maintenance Optimisation or Asset Optimisation? In this paper we examine whether such a relationship exists. We also explore whether the DOM (Design Out Maintenance) strategy can be effectively employed to improve the Reliability and Maintainability (R&M) of an industrial facility or engineered system in order to achieve the twin objectives of profitability and sustainability.

Introduction:

First let us state the definitions of Reliability and Maintainability. Then let us expand on what is actually meant by Maintenance or Asset Optimisation. 

Reliability — “The probability that an item will perform a required function, under stated conditions, for a stated period of time.”

Reliability is therefore the extension of quality into the time domain and may be paraphrased as the ‘probability of non-failure in a given period of time’.

Maintainability — “The probability of repair in a given time.”
Expanding on that, Maintainability means the probability that a failed item will be restored to operational effectiveness within a given period of time when repair is performed in accordance with prescribed procedures.
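For the textbook special case of a constant failure rate λ and an exponentially distributed repair time with rate μ (an assumption added here purely for illustration; the definitions above do not fix any particular distribution), both probabilities take simple closed forms:

    R(t) = e^(−λt), so that MTBF = 1/λ
    M(t) = 1 − e^(−μt), so that MTTR = 1/μ

For example, an item with an MTBF of 1,000 hours has R(1000) = e^(−1) ≈ 0.37, i.e. roughly a 37% chance of completing one full MTBF interval without failure.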

Maintenance Optimisation or Asset Optimisation: 

The idea of an optimised maintenance program suggests that an adequate mix of maintenance strategies and actions needs to be formulated and fine-tuned in order to improve uptime, extend the total life cycle of the physical asset and assure safe working conditions, while bearing in mind limited maintenance budgets and environmental legislation. This is not straightforward and requires a holistic view. Therefore, a maintenance concept is needed for each installation or factory: it is necessary to plan, control and improve the various maintenance strategies, actions and policies as applied to that installation within the given constraints of time, manpower, skills, knowledge, budgets and legislation.

A maintenance concept or strategy may in the long term even become a guiding philosophy for a facility in performing maintenance and engineering. In some cases, advanced maintenance strategies are almost considered policies in their own right. What is certain is that maintenance strategies determine the business philosophy concerning maintenance and engineering, and they are needed to manage the complexity of maintenance per se. In practice, it is clear that more and more companies are spending time and effort determining the right maintenance concept and strategies applicable in their context.

Maintenance Strategies

The usual mix of maintenance strategies used for Maintenance/Asset Optimisation is RTF (Run to Failure), TBM (Time Based Maintenance), UBM (User Based Maintenance, also known as Opportunity Maintenance), CBM (Condition Based Maintenance), E-Maintenance and DOM (Design Out Maintenance).

At present, RTF, TBM, UBM, CBM and E-Maintenance accept the inherent reliability of the physical asset they are intended to maintain as a given fact. The governing concept is that once a machine is designed, manufactured and installed, the upper limits of reliability and maintainability are fixed and cannot be improved upon during the operational stage. This is true to a great extent. However, such implicit acceptance effectively caps the productivity, performance and profitability of a manufacturing facility. Hence maintenance, as usually practised, fails to keep pace with the constant market pressure to improve productivity, performance, profitability and sustainability. This can force a company out of business, or force it to settle for lower profits until new or additional equipment is purchased to meet desired business goals. Clearly, this is a costly proposition even for cash-rich companies.

The alternative lies in innovating, or making greater use of, the DOM (Design Out Maintenance) strategy on a given set of physical assets, considering the operating context of a facility. Instead of taking a system as given, DOM looks at the possible changes (usually small innovations) or measures needed to avoid or minimise maintenance in the first place. Adopting a DOM policy implies that maintenance is proactively involved at different stages of the equipment life cycle to solve problems of failure, or problems that prevent an organisation from achieving its business goals. This may be done either at the procurement stage or after installation, when a machine is in operation. It is therefore prudent to apply the DOM strategy at both the procurement stage and the operating stage to get the best benefits.

Ideally, the DOM strategy intends to completely avoid or minimise maintenance throughout the operating life of equipment. Though this may appear unrealistic on the surface, it is entirely possible. One approach is to consider a diverse set of maintenance requirements at the early stages of equipment design, during the procurement process, based on available knowledge of potential failure patterns and problems studied against business requirements. The other approach is to consider the behaviour of the equipment during the operating stage and to eliminate, avoid or minimise the possibility of failures through simple modifications/innovations on existing equipment.

As a consequence, equipment modifications, along with process modifications and modification of maintenance processes, are geared either at increasing reliability by raising the MTBF (Mean Time Between Failures) or at improving maintainability by lowering the MTTR (Mean Time To Repair); a small worked sketch of this arithmetic follows the list below. This may be done in various ways, and in some situations both MTBF and MTTR must be addressed simultaneously. Per se, DOM aims to improve the following:
1. Equipment availability, by extending the MFOL (Mean Free Operating Life)
2. Production capacity, by minimising unplanned downtime
3. Safety, by eliminating the consequences of failures and reducing failure rates
4. Total life cycle of the equipment, by using the equipment for the maximum possible years
5. Life Cycle Costs (LCC), by minimising maintenance costs
6. Sustainability, through optimised use of resources to run the system at its best operating condition.

In all the above cases it is imperative to lower or effectively contain the failure rate and the potential hazards of the equipment, and to minimise loss or excess use of resources.
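To make the MTBF/MTTR arithmetic concrete, here is a minimal sketch in Python (an illustration with made-up numbers, not part of the original paper) that estimates MTBF, MTTR and steady-state availability from a simple failure/repair log:

    from statistics import mean

    # Hypothetical log: (hours run before each failure, hours taken to repair)
    events = [(420.0, 6.0), (510.0, 4.5), (380.0, 8.0), (455.0, 5.0)]

    mtbf = mean(up for up, _ in events)      # Mean Time Between Failures
    mttr = mean(down for _, down in events)  # Mean Time To Repair
    availability = mtbf / (mtbf + mttr)      # steady-state availability

    print(f"MTBF = {mtbf:.0f} h, MTTR = {mttr:.1f} h, availability = {availability:.3%}")

Any DOM action that raises MTBF or lowers MTTR pushes the availability figure towards 1, which is why the two levers are treated together above.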

Modifications (usually small innovations), which lie at the heart of DOM, may include the following (not an exhaustive list):

  1. change of dimensions and material flows
  2. change of material 
  3. change of condition of surfaces, structures and interfaces
  4. change of ergonomics 
  5. change of maintenance process, planning and procedures
  6. change of designs, controls and knowledge
  7. change of items and parts, lubricants and redundancy 
  8. change through effective scaling 
  9. change in use of resources 
  10. change in set ups, speeds, and operational processes
  11. change of environment, reactions and interactions
  12. change of thermal and energy flows

However, adoption of the DOM strategy does not exclude the application of other available strategies in the overall process of improvement. Judicious application of all available strategies is often necessary to achieve the business goals of a manufacturing facility; for this purpose a completely new process has been developed, which focuses critically on improving the Reliability and Maintainability of a facility. Finally, much depends on how the DOM projects are formulated, implemented and managed across a manufacturing facility. Here agility and constancy of purpose are the two critical management factors that determine the quality of the results: profitability and sustainability. This effectively translates to lowering the Total Cost of Ownership of the facility, which is the essence of Asset or Maintenance Optimisation.


A Movement towards RCM

29th December 2017, Kolkata

On 29th December 1978, F. Stanley Nowlan and Howard F. Heap, in their seminal work Reliability Centered Maintenance, revealed the fallacy of the two basic principles adopted by traditional PM (Preventive Maintenance) programs, a concept that dates from World War II:

  •  A strong correlation exists between equipment age and failure rate: the older the equipment, the higher the failure rate.
  •  Individual component and equipment probability of failure can be determined statistically, and therefore components can be replaced or refurbished prior to failure.

However, the first person to reveal the fallacy was Waddington, who conducted his research during World War II on British fighter planes. He found that the failure rate of the fighter planes always increased immediately after time-based preventive maintenance, which was scheduled after every 60 hours of operation or flying time.

By the 1980s, alternatives to traditional Preventive Maintenance (PM) programs began to migrate to the maintenance arena. While computer power first supported interval-based maintenance by specifying failure probabilities, continued advances in the 1990s began to change maintenance practices yet again. The development of affordable microprocessors and increased computer literacy in the workforce made it possible to improve upon interval-based maintenance techniques by distinguishing other equipment failure characteristics, such as the pattern of randomness exhibited by most failures. These included the precursors of failure, quantified equipment condition, and improved repair scheduling.

The emergence of new maintenance techniques called Condition Monitoring (CdM) or Condition-based Maintenance (CBM) supported the findings of Waddington, Nowlan and Heap.

Subsequently, industry emphasis on CBM increased and reliance upon PM decreased. However, CBM should not replace all time-based maintenance. Time-based or interval-based maintenance is still appropriate for failure modes that exhibit a distinct time-based pattern (generally dominated by wear phenomena): where abrasive, erosive or corrosive wear takes place, or where material properties change due to fatigue, embrittlement or similar processes. In short, PM (time-based or interval-based maintenance) is still applicable where a clear correlation between age and functional reliability exists.

While many industrial organizations were expanding PM efforts to nearly all other assets, the airline industry, led by the efforts of Nowlan and Heap, took a different approach and developed a maintenance process based on system functions, the consequence of failure, and failure modes. Their work led to the development of Reliability-Centered Maintenance, first published on 29th December 1978 and sponsored by the Office of the Assistant Secretary of Defense (Manpower, Reserve Affairs, and Logistics). Additional independent studies confirmed their findings.

In 1982 the United States Navy expanded the scope of RCM beyond aircraft and addressed more down-to-earth equipment. These studies noted that a difference existed between the perceived and intrinsic design life for the majority of equipment and components. For example, the intrinsic design life of anti-friction bearings may be stated as five years, or two years. But as perceived in industry, the life of anti-friction bearings usually exhibits randomness over a large range: in most cases bearings exhibit a life that either greatly exceeds the stated design life or falls well short of it. Clearly, in such cases, time-directed interval-based preventive maintenance is neither effective (it initiates unnecessary forced outages) nor cost-effective.

The process of determining the difference between perceived and intrinsic design life is known as Age Exploration (AE). AE was used by the U.S. Submarine Force in the early 1970s to extend the time between periodic overhauls and to replace time-based tasks with condition-based tasks. The initial program was limited to Fleet Ballistic Missile submarines. The use of AE was expanded continually until it included all submarines, aircraft carriers, other major combatants, and ships of the Military Sealift Command. The Navy stated the requirements of RCM and Condition-based Monitoring as part of the design specifications.

Continual development of relatively affordable test equipment and computerized maintenance management software (CMMS, such as MIMIC developed by WM Engineering of the University of Manchester) from the 1990s to date has made it possible to:

  •  Determine the actual condition of equipment without relying on traditional techniques, which base the probability of failure on age and appearance rather than on the actual condition of the equipment or item.
  •  Track and analyze equipment history as a means of determining failure patterns and life-cycle cost.

RCM has long been accepted by the aircraft industry, the spacecraft industry, the nuclear industry, and the Department of Defense (DoD), but it is a relatively new way of approaching maintenance for the majority of facilities outside these four areas. The benefits of an RCM approach far exceed those of any one type of maintenance program.

Fortunately, RCM was applied in a few Indian manufacturing industries from 1990 onwards with relatively great success. I am particularly happy to have been involved in the development and application of RCM in Indian industries, where it has continually evolved in terms of techniques and methods of application to meet contextual industrial needs.

I am also happy to report that RCM for industrial use has now reached a mature stage of its development, which can be replicated for any manufacturing industry.

I am of the opinion that this maturity provides the necessary stepping stone to develop Industry 4.0 and meaningful IoT applications for manufacturing industries.

Wish RCM a very happy birthday!

by

Dibyendu De

Note on Raised “Noise Floor”

In a spectrum, if the entire noise floor is raised, it is possible that we have a situation of extreme bearing wear.

If the noise is biased towards the higher frequencies in the spectrum then we may have a process or flow problem, such as cavitation, which may be further confirmed by a high acceleration measurement (or filtered acceleration measurement) on the pump body on the delivery side (since high frequency waves are always localized).

Smaller “humps” may be due to resonance (possibly excited by anti-friction bearing damage, cavitation, looseness, rubs or impacts) or to closely spaced sidebands arising from other defects. A high resolution measurement (or a graphical zoom and a log scale) may reveal whether the source is a problem that exhibits sidebands or a problem of resonance. If the machine speed can be changed (e.g. a motor connected to a VFD drive), the resonant frequency will not move, but the other peaks will. Sidebands will typically be symmetrical around a dominant peak, e.g. 1X, 2X, 2× line frequency (100 or 120 Hz) etc., indicating different faults.

Interestingly, the time waveform would reveal the reason as to why the noise floor has been raised.

We would see signs of looseness, severe bearing wear, rubs, and other sources of impacts in the time waveform. We must make sure that there are 5 – 10 seconds of time waveform if we suspect an intermittent rub (e.g. white metal bearings of vertical pumps or loose electrical connection of motor terminals) or if we suspect flow turbulence or cavitation.

If the time waveform looks normal (making sure there is a high enough Fmax, following the Nyquist criterion, and that we view the waveform in units of acceleration), then increase the resolution of the spectrum to 3200 lines or higher in case we are seeing a family of sidebands (like the sidebands we find around gear mesh frequency or rotor bar pass frequency).
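As a rough aid to choosing these measurement settings, the sketch below (an illustration, not part of the original note; the 2.56 factor is the common analyser sampling convention, a practical margin above the Nyquist minimum of 2) relates Fmax, the number of lines, the sample rate and the waveform capture time:

    def fft_setup(fmax_hz: float, lines: int):
        sample_rate = 2.56 * fmax_hz    # typical analyser ratio, above the Nyquist minimum
        delta_f = fmax_hz / lines       # frequency resolution per spectral line
        capture_time = lines / fmax_hz  # seconds of waveform per average (= 1/delta_f)
        return sample_rate, delta_f, capture_time

    # Example: Fmax = 1000 Hz at 3200 lines resolves about 0.31 Hz per line,
    # fine enough to separate closely spaced sidebands.
    sr, df, t = fft_setup(1000.0, 3200)
    print(f"sample rate {sr:.0f} Hz, resolution {df:.4f} Hz/line, capture {t:.1f} s")

Note that finer resolution directly costs longer capture time, which squares with collecting 5 to 10 seconds of waveform when hunting intermittent events.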

But if a natural frequency is being excited (necessary condition for resonance) then we have to perform a bump/impact test or a run-up/coast down test to confirm the situation.

Fretting Corrosion

In a plant, it so happened that a machine with its shaft and pulley assembly was kept idle for a little over three years.

Then one day the engineers decided to run the machine. After two months of running, the pulley came loose on the shaft and started rattling, making just enough noise for the operator to notice it and promptly stop the machine, thus averting a nasty accident.

This is a case of fretting corrosion. It happens when things are kept in an assembled condition for a long time without running, or when components are assembled loosely. The asperities at the contact surface that help hold the two components together are lost, and with them the vital grip, forcing the components to come loose. This wear process is accelerated in the presence of the low frequency vibration that usually travels to such joints or assemblies from other running machines. The confirmation of fretting corrosion lies in observing a reddish coloured powder between the closely fitting joint interfaces and assemblies.

Pictures of the fretting corrosion as seen in this case accompany the original post.

Ways to manage this failure mode:

1. Take care to assemble correctly

2. Don’t leave a machine idle for a long time.

3. Prevent, as far as possible, low frequency vibrations from travelling to the machine.

4. If an idle machine is to be commissioned then take care to inspect the joints and interfaces and replace assemblies as found necessary.

5. Monitor by Wear Debris Analysis for lubricated joints and interfaces, by vibration monitoring for dry joints and interfaces, or simply by visual inspection.

The Case of Burning Baghouse Filters

Recently I was invited to investigate a case of frequent burning of baghouse filter bags.

There were five such baghouses connected to five furnaces of a steel plant.

The client reasoned that the material of the bags was not suitable for the temperature of the gas they handled. However, with a change of material the frequency of bag burning did not change. So a different approach was needed to home in on the reasons for the failures.

Hence, this is how I went about solving the case:

First, I did a Weibull analysis of the failures. Engineers use the Weibull distribution to quickly find out the failure pattern of a system. Once such a pattern is obtained, an engineer can go deeper into studying the probability density function (pdf). Such a pdf provides an engineer with many important clues. The most important clue it provides is the reason for such repeated failures, broadly classified as follows (a small illustrative fit follows the list):

  1. Design related causes
  2. Operation and Maintenance related causes
  3. Age related causes.
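As an illustration of how the fitted Weibull shape parameter β points to one of these three classes (β < 1 suggests early-life, design or installation related problems; β ≈ 1 suggests random, operation and maintenance driven failures; β > 1 suggests age-related wear-out), here is a minimal sketch in Python using made-up failure ages, not the actual plant data:

    import numpy as np
    from scipy.stats import weibull_min

    # Hypothetical ages at failure of filter bags, in days (illustrative only)
    ages = np.array([112.0, 90.0, 135.0, 60.0, 150.0, 98.0, 120.0, 75.0, 140.0, 105.0])

    # Fit a two-parameter Weibull distribution (location fixed at zero)
    beta, _, eta = weibull_min.fit(ages, floc=0)
    print(f"shape beta = {beta:.2f}, scale eta = {eta:.1f} days")

    if beta < 0.9:
        print("Early-life failures dominate: look at design/installation.")
    elif beta <= 1.1:
        print("Failures look random: look at operating and maintenance practice.")
    else:
        print("Wear-out pattern: look at age-related degradation.")

In practice a combination of causes, as found here, often shows up as a poor single-Weibull fit, prompting a closer look at mixed failure populations.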

In this case it turned out to be a combination of Design and Age related causes.

It was a vital clue that then guided me to look deeper to isolate the design and age related factors affecting the system.

I then did a modified FMEA (Failure Mode and Effect Analysis) for the two causes.

The FMEA revealed many inherent imperfections that were related to either design or aging.

Broadly, the causes were:

  1. Inability of the FD cooler (Forced Draft cooler) to take out excess heat up to the design limit before allowing the hot gas to enter the bag house.
  2. Inappropriate sequence of cleaning of the bag filters, which was out of sync with the operational sequence, thus allowing relatively hot dust to build up on the surface of the bags.

Next, the maintenance plan was reviewed. The method used was Review of Equipment Maintenance (REM). The goal of such a review is to find maintenance tasks that are either missing or redundant, so that tasks can be added, deleted or modified accordingly. With such modification of the maintenance plan, the aim is to achieve a balance between tasks that help detect incipient signals of deterioration and tasks that help maintain the longevity and stability of the system for a desired period of time.

Finally, the investigation was wrapped up by formulating the Task Implementation Plan (TIP). It comprised 13 broad tasks, which were then broken up into more than 100 sub-tasks with scheduled completion dates and accountability.

 

Observing Complexity

To me, observing real life systems is something like this:

A real life System comprises a meaningful set of objects, diverse in form, state and function but inter-related through multiple networks of interdependencies and mutual feedbacks, enclosed by a variable space, operating far from its equilibrium conditions, not only exchanging energy and matter with its environment but also generating internal entropy to undergo discrete transformations triggered by the Arrow of Time, forcing it to behave in a dissipative but self-organizing manner, to either self-destruct in a wide variety of ways or create new possibilities in performance and/or behaviour owing to the presence of ‘attractors’ and ‘bifurcations’; thereby making it impossible to predict the future behaviour of the system in the long term, or to trace the previous states of the system with any high degree of accuracy, other than to express it in terms of probabilities, since only the present state of the system might be observable to a certain extent, and only a probabilistic understanding may be formulated as to how it has arrived at its present state and what would keep it going; thus triggering creative human responses to manage, maintain and enhance the system's condition, function and purpose, and to create superior systems of the future for the benefit of society at large.

Such a representation of an observation looks quite involved. Perhaps it might be stated in a much simpler way. Most real life systems behave in a complex manner, creating a multitude of problems of performance and failures. But how do we get rid of the complexity and uncertainty exhibited by systems? We may do so by deeply observing the complex behaviour of the system to improve our perception and gain insights about the essence of the system; find the underlying ‘imperfection’ that causes the apparent complexity and uncertainty; and then find ways to improve the existing system, or create a new system, and maintain it in the simplest possible manner. We do this by applying the principles of chaos, reliability and design. Surprisingly, the same process might be used to troubleshoot and solve the problems we face on a daily basis. Done this way, we are no longer dominated or dictated to by the ‘special whims’ of the system.

The crux of the matter is how we observe reality and understand it so as to make meaningful choices as responses to life and living.

Creativity in Solving Complex Problems

The other day, at the end of my seminar on “Solving Complex Engineering Problems”, a delegate asked me whether the entire process of solving complex problems could be automated in some way by means of software, instead of relying on human creativity.

Such a response wasn’t unexpected. In the corporate world the word “creativity” is often looked upon with suspicion. Companies would rather prefer structured, standard approaches, like “brainstorming” at 10.00 am sharp, team work, or collaborative effort, which in my opinion do little to help anyone solve complex problems or even address complex problems correctly.

That might be the single most important reason why “complex problems” remain unresolved for years affecting profitability and long term sustenance of an organization. Failing to resolve complex problems for years often earns such problems the sobriquet of “wicked problems”, which means that such problems are too tough for “any expert” to come to grips with.

What they sadly miss is the role of creativity in solving complex problems, which no automation or technology can ever replicate. They miss this because most organizations systemically smother or mercilessly boot out any remnant of creativity in their people, thinking it is always easier to control and manage a regimented workforce devoid of even elementary traces of creativity.

So, is managing creativity and creative people a messy affair? On the surface it seems so. This is simply because we generally have only a vague idea of what drives, inspires and really sustains creativity.

Creativity is not about wearing hair long or wearing weird clothes, singing strange tunes, coming to office late and being rude to bosses for no apparent reasons. These things hardly make anyone creative or help anyone become a more creative person.

Actually, things like “being attentive and aware”, “sensitive”, “passionate”, “concerned”, “committed” and above all “inventive” just might be the necessary ingredients to drive, inspire and sustain creativity.

Why?

Though there are many ways of describing and defining creativity, what I like best is this: “creativity is the expression of one’s understanding and expression of oneself”; the deeper the understanding, the better the expression of creativity.

When we look at creativity in this manner it is obvious that we are all creative though the expression and its fidelity might vary to a great extent. Clearly, some are simply better than others.

Further, if creativity may be thought of as a process, then the inputs and the clarity of our understanding of ourselves are more valuable elements of the system than the outputs, which the process anyway consistently churns out (remember the uncountable hours spent in organizations meeting, discussing and brainstorming to solve complex problems).

In these days of economic depression, organizations can really do themselves a huge favor if only they pay more attention to facilitating such inputs for their people rather than getting overly worried about control and management by conformity.

Expert Knowledge is Passé; Long Live Masters!

Engaging with the flow created by any phenomenon is an essential step we take to create something new, which invariably amounts to an interpretation of our environment or surroundings, triggered by noticing something from the higher levels of the mind that is less dependent on sensory inputs.

Why is this necessary?

Since our mind is a system consisting of complex networks, it has memory, like all other networks. Memory then compels the network (our mind) to behave in very predictable patterns, i.e. it continues to behave the way it does unless the energy of the system is changed by design. This means that our response to any situation stays the same unless we add new energy to our existing network, urging it to respond or behave differently.

That is the basic idea of engaging with the flow — to add new energy to our neural network to come up with a different response to a situation we are facing in the moment.

But that is tricky business. Much trickier than we might care to imagine. It is because we must notice things in quick succession (almost as quickly as clearly noticing a ten digit telephone number) for our neurons to get energized enough to rise above their critical threshold limit and create harmonious oscillations, helping us create new knowledge and responses. Fortunately, our neurons, when noticing different aspects of a phenomenon in quick succession, produce different frequencies from moment to moment, which helps to create new responses. However, to produce useful new harmonious frequencies our mind also needs to be supported by healthy relaxation oscillations. Relaxation oscillations help us absorb new learning. A relaxation oscillation in the brain works something like this — neurons slowly absorb energy and then quickly release it. This release of energy helps neurons jump over their critical threshold limit to create harmonious oscillations.

Let us understand this process by some live examples.

For example, Sachin Tendulkar is considered the ‘god’ of cricket. Captains and bowlers of rival teams have a hard time setting a field to hold him down. He always tends to find the gaps too easily against any type of bowling. It is easy to imagine that he is noticing, in quick succession, many aspects of the phenomenon — the bowler, his run up and stance, his delivery, the speed of the ball, the trajectory of the ball, the movement of the fielders, and so on (really quickly, since the ball is traveling at a speed of nearly 100 km/hour). Within that time he decides where and how to place the ball to get runs, which is invariably into the gaps in the field.

Or take Ravi Shankar, the great musician, who plays so intuitively. To me intuition is nothing but the same process as described above, where new harmonic oscillations are produced with the help of relaxation oscillations.

Or take Michelangelo, who saw figures trapped in uncut stone, waiting to be freed by his hands.

There is one thing common to all of them which sets them apart from the rest. They all intuitively find the gaps, or the existing imperfections, in the present moment, with their uninhibited awareness, to reach their goal. This is because all human minds are by default goal oriented, since human consciousness is more temporal than spatial. They improvise their games based on those gaps or existing imperfections in the most intuitive way — no copy-book styles for them. They have learned the rules of their games so well that they now break them with impunity, having mastered the way to trigger relaxation oscillations at will. This process of engagement is played over and over in whatever game masters choose to play. Games differ but the process of engagement does not.

This is what innovation, improvisation, improvement, creating new knowledge is all about.

The Japanese have a name for it. They call it Wabi-Sabi, which means to understand the imperfection in a given situation and improve upon it to make it stronger and more reliable.

The Chinese have a name for it. They call it Shan Zhai, which originally means balancing numerous resistances: see what can be done cheaply and effectively, start small, and then grow in strength.

The Indians have a name for it. They call it Jugaad, which means understand what is to be done, start with whatever is available at hand, go with the flow and build up over time.

How would this be useful in present times?

Today, expert knowledge (essentially a knowledge bank) is sold in the market as a commodity, continually sold at lower and lower prices, wiping out the premiums it once commanded. This is because expert knowledge is increasingly being converted to cheap, ordinary stuff through algorithms. In some fields of human activity the value of expert knowledge is almost zero — given away freely over the internet. How, then, are we to survive in the present situation? It definitely calls for a new skill: the skill of mastery, where new knowledge can be created moment to moment. This amounts to present-moment responses to a changing situation. People who can really do that are priceless and can still command a premium in today’s market place.

Such a skill of mastery basically calls on us to be in touch with our essential nature. Gregory Bateson reminds us of this fact when he said, “When man lost touch with nature, he lost touch with himself.” Simply stated, “losing touch with himself” is disengagement, a phenomenon that is all too common in our professional world.

This is the only way to create a good sustainable future for all since, “The future is never empty, never a blank space to be filled with the output of human activity. It is already colonized by what the past and present have sent to it.” (Fry 1999)

How do we develop that? That is the question. It involves deep learning. And deep learning is done by the power of engaging with the flow of the moment.

State a Problem in Simplest Terms

The other day, a long-term client of mine called me up to look at a problem of theirs. Since it is a public sector organization, they soon sent me an RFP (request for proposal) over email with a fairly detailed SOW (Scope of Work).

In the SOW, they mentioned all that needed to be done, almost breaking down each step. In short, they were proposing a detailed method to solve their problem.

When they followed me up over phone, I said, “With such a detailed methodology in place, why would you ever need me?”

Sensing that they did not get it, I elaborated, “Does it mean that if you just had the results of the steps you have listed, you would get the answer you are looking for? Don’t you think such detailed investigations, which you have already carried out earlier, would inflate costs without getting anywhere close to the solution?”

Fortunately, they quickly realized the gap. They asked me, “What should we do then?”

I replied, “State your problem in its most basic and simplest terms. Complex, nagging problems can’t be neatly defined. Instead, you could just state your concern about the problem. That would trigger our collective minds to flow easily to reach a solution. For example, you could state that you are suffering from a headache every evening.”

“O.K.” they said, “Our issue is that we are responding as per prescribed textbook rules to solve this problem and the problem seems to be temporarily fixed but it resurfaces after some time.”

“Just state that. And then we follow the cues to get to a working hypothesis, a working methodology to test out the hypothesis, collect data, arrange and interpret the data, collectively understand the issue with the simplest theories that fit the facts of the problem, formulate practical actions, carry out those to test our hypothesis and learn more to eliminate the problem for good.”

In the next fifteen minutes they sent the revised RFP, stating their concern.

By stating a problem in its basic and simplest terms we allow our minds to pay attention to flow effortlessly towards a solution.

That is what is needed when we tackle complex problems – problems for which answers are not available in the books or can’t be googled.

Developing Non-linear Thinking Skills

We know that linear logic fails when comprehending complexity. That appears to be the most important reason why most people find it difficult to understand complex situations or grapple with complex problems.

With simple linear logic, principles come first and deductions follow. Hence the process may be described as:

Observe -> Model the observations based on relevant domain theory -> apply Principles/mathematics -> Deduction

Fair to say that this standard approach, based on linear logic, is used in science and engineering to solve linear problems. Since this is an efficient way of thinking it dominates our educational, professional and social lives.

But when it comes to solving non-linear complex problems (unfortunately most life problems are non-linear) application of linear logic fails. Instead what is needed is the development of non-linear thinking skills.

In fact, a non-linear thinking style is a necessary skill within the larger theoretical framework of digital literacy across multiple formats, known as transliteracy, exercised in a transmedia learning environment.

Nonlinear thinking styles are defined as using intuition, insight, creativity and emotions when comprehending and communicating information (Vance, Groves, Paik and Kindler, 2007).

But how to develop non-linear thinking skills?

Below I give a three-step approach, one of the many approaches I have developed for the specific purpose of developing the non-linear thinking skills of my adult professional students. This specific technique is christened the Fugue technique.

1. Think in terms of a fugue. In a fugue, all the notes cannot be constrained into a single melodic scale. Compressing everything into one melodic scale is analogous to modelling a phenomenon or behaviour at a high level of abstraction, which is the dominant characteristic of the linear thinking style. Make this clear to the participating group: it relieves them of the unnecessary stress of finding the “one right answer” or “one right approach” to a complex problem.

2. Bring people together to tackle a complex problem. Make sure that the participants are familiar with the problem. This means that complex problems should be selected from the familiar working environment of the participants, or should be problems the participants have grappled with but failed to solve.

Putting a number of people together gives us a big advantage: different people see the same problem in different ways, depending on their specific strengths, mental makeup, tendencies and practice. Some find parts of the problem easy to see and understand which others might find too difficult even to notice. Each member of the group is then encouraged to focus on the parts of the problem that come easily to them, so as to come up with their own unique perspectives and understanding.

Before allowing people to jump in, preferably use different media to present the problem — narrative, story telling, printed material, videos, pictures, data, internet references, etc.

3. Invite the group to plunge directly into the midst of things and follow the temporal order created by the thoughts of the different group members. Build upon each other's thoughts. Never mind if we get different strings of thoughts building different lines of thinking; that is the most desired output. Encourage all forms of communication — dialogues, debates, discussions, collaboration, negotiation, etc. Be patient with the flow of time; activity might show sudden bursts of energy at various points. Allow people to express their thoughts through different media — verbal, slide shows, discussions, drawings, doodles, story telling, narratives/presentations, logical interpretation through principles, etc. It is expected that each member communicates in his/her preferred style of communication.

Link the different strings of thoughts, or different perspectives, to build a collective but coherent understanding of the complex problem without attempting to force them into “one melodic scale.” That is, it is not necessary to align the different perspectives into one linear path: multiple paths are encouraged, and multiple solutions are expected as the norm. The output measured against time far exceeds that of a linear approach, and the exercise increases both the depth and breadth of learning. In Nemetic terms the resultant ecology is known as nemePlx or nPx.

When a group performs this exercise on many live problems over a span of a few days (a four-day session appears to be just enough), it propels the students to develop their non-linear thinking skills. It also develops their transliteracy skills (a non-linear thinking skill) immersed in a transmedia learning environment.

Note:

1. This Fugue technique has been used extensively with Power Plant professionals in solving their complex problems.

2. The author is of the opinion that non-linear thinking skills cannot be taught in any explicit manner.

References:

1. Osterman, M., Reio, T. G., Jr., & Thirunarayanan, M. O. Digital Literacy: A Demand for Nonlinear Thinking Styles. Florida International University, USA. http://digitalcommons.fiu.edu/cgi/viewcontent.cgi?article=1321&context=sferc

2. Davidson, C. (2011). Now You See It: How the Brain Science of Attention Will Transform the Way We Live, Work, and Learn. New York, NY: Penguin.