Structure of a 2-day workshop on RCM

Day 1 
Session 1 – Introduction to RCM, History and 7 Questions
* Definition of Reliability, RCM and the 7 Vital Questions
* Maintenance Strategies
* Waddington Effect
* Nowlan & Heap’s Failure Patterns
* Inherent Reliability and its improvement strategy
Session 2 — Operating Context and Functions 
* Introduction to Operating Context
* Operating Context for a System
* Elements to be included
* Operating Context and Functions
* Five general operating contexts
* Operating Context and Functional Failures
Session 3 – Failure Modes and Failure Effects  
* Introduction to Failure Modes
* Few thoughts about data
* Exploring Failure Modes
* 4 Rules for Physical Failure Modes
* Failure Effect
* Evidence that failure is occurring
Session 4 — Failure Consequence and Risk 
* Introduction to Decision Diagram
* Risk assessment — how each failure matters
* Is the function hidden or evident?
* Relation of time to hidden vs evident failures
* Safety and Environmental Consequences
* Operational and non-operational Consequences
Day 2 
Session 5 — Strategies and Proactive Tasks 
* Introduction to Proactive Tasks and PF interval
* CBM/On-condition tasks
* Scheduled Restoration and Scheduled Discard Tasks
* Determining Task Effectiveness
* Risk and Tolerability
* General Rules for following the decision diagram
Session 6 — Default Actions 
* Introduction to Default Actions
* Default tasks for hidden failures
* Failure Finding Task
* Failure finding Interval
* Design Out Maintenance — to do or to be
* Walk around checks with right timing
Session 7 — RCM Audits 
* Introduction to Audits
* Fundamentals of Technical Audit
* Technical Audit process
* Fundamentals of Management Audit
* General Management Audit process
* What RCM achieves
Session 8 — Setting up a Successful Living Program 
* Using the power of facilitated group
* RCM Training
* Knowledge development and its process
* Failure Modes and Design Maturity
* RCM during scale up or expansion
* Summary and Conclusion

The Sad Story of the HFO pump

This is an HFO (Heavy Fuel Oil) screw pump used in a power plant for running boilers. There was a catastrophic failure of the pump. Though the pump was regularly monitored by vibration (in velocity mode — mm/sec), it gave no indication of the impending failure.

The screws of the pump rubbed against each other and the case hardened layers of both screws were crushed. The force was so great that the body of the pump also cracked. Evidence of corrosion was also noticed.

What caused it? 

For want of HFO, the plant personnel had been forced to pump LDO (Light Diesel Oil) through this HFO pump for the past year.

Hence the I, A, R factors that contributed to this catastrophic failure are the following:

Initiator(s) (I) — factor(s) that trigger the problem. The low viscosity of LDO compared to that of HFO was the significant ‘initiator’ in this case. While the viscosity of LDO ranges from 2.5 to 5 cSt, the viscosity of HFO varies between 30 and 50 cSt (depending on the additives used). Use of lower-viscosity oil ensured metal-to-metal contact, thereby increasing the Hertz stress that led to the collapse of the hardened layer of the screws.

Accelerator(s) (A) — factor(s) that accelerate the process of failure. a) Indian HFO does not contain friction modifiers such as vanadium and magnesium. Their absence causes higher friction between the screws (approximately a 70-fold increase in friction), which accelerates the wear process. b) Moreover, the presence of vanadium and magnesium additives acts as an anti-corrosive agent. Notice that the failure happened a year after the management decided to pump LDO rather than HFO through the HFO pump — enough time for corrosion to take effect. So we may say that at least two factors accelerated the failure process. There are other effects on system performance too, which we shall discuss in a moment (refer to the “Note”).

Retarder(s) (R) — factors that slow down the failure process: a) surface finish of the screws, b) correct bearing clearance, c) presence of chromium in the screws.

Surface finish plays a very important role in reducing metal-to-metal friction and also allows a fluid film to develop. Ideally, the surface finish should be between 3 and 6 microns CLA (Centre Line Average) for best effect. This can be introduced as a specification in the MOC (Material of Construction).

Similarly, excessive clearance in the bearings would modify the Hertz stress zone or profile — both in width and depth — causing shear of the hard layer (whose depth depends on the type of hardening and the type of steel used) and of the soft core material. Depth and type of hardening might also be specified in the MOC to prevent failures and extend the life of the equipment. Presence of chromium in the metal would help form a vanadium-oxygen-chromium bond, which would effectively enhance life by providing better lubricating properties and, in turn, ensure a high level of reliability of the equipment.

Hence, once the I, A and R factors are identified, appropriate measures can be taken to modify the maintenance plan, MOC, etc. to ensure a long equipment life without negative safety consequences (the heart of reliability improvement).
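The I-A-R classification above can be captured in a simple structure. A minimal sketch (the grouping and field names are mine; the factors are those identified in the analysis above):

```python
# IAR factors behind the HFO pump failure, grouped by role.
# Action principle: eliminate/avoid initiators, weaken accelerators,
# strengthen retarders.
iar = {
    "initiators": [
        "low viscosity of LDO (2.5-5 cSt) vs HFO (30-50 cSt)",
    ],
    "accelerators": [
        "absence of friction modifiers (vanadium, magnesium) in the fuel",
        "loss of anti-corrosive additive action, allowing corrosion",
    ],
    "retarders": [
        "surface finish of the screws",
        "correct bearing clearance",
        "chromium content of the screws",
    ],
}

for role, factors in iar.items():
    print(f"{role}: {len(factors)} factor(s)")
```

Listing the factors this way makes the next step — choosing countermeasures per factor — mechanical rather than ad hoc.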

Example:

  1. Specify addition of vanadium and magnesium to the HFO during supply, or add these at site after receipt. (Material specification during purchase.)
  2. Ensure the right viscosity of the oil to be pumped through HFO pumps. (Monitor the viscosity of the supply oil — not higher than 50 cSt and not lower than 30 cSt.)
  3. Specify surface roughness of the screws — 3 to 6 microns (CLA).
  4. Specify depth of hardness of the screws (below 580 microns so that the interface between the hard layer and the soft core remains unaffected by the Hertz stress) during procurement and supply. Preferable type of hardening of the screws would be nitriding.
  5. Specify chromium percentage in the screws (during purchase).
  6. Monitor bearing clearance on a regular basis and change as needed (by vibration analysis based on velocity and acceleration parameters).
  7. Monitor the body temperature of the pump to notice adverse frictional effects.
  8. Monitor the growth of incipient failures in the screws by vibration monitoring (acceleration and displacement parameters).
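Several of the measures above are simple acceptance windows on measurable parameters. As a hedged sketch, they can be encoded as checks (the limits come from the text; the parameter keys and function name are mine):

```python
# Acceptance windows from the corrective-action list: (low, high); None = unbounded.
SPECS = {
    "oil_viscosity_cSt":      (30.0, 50.0),   # HFO supply viscosity window
    "screw_roughness_um_CLA": (3.0, 6.0),     # surface finish of the screws
    "hardening_depth_um":     (None, 580.0),  # keep hard/soft interface clear of Hertz stress
}

def check(parameter, value):
    """Return True if a measured value lies inside its specified window."""
    low, high = SPECS[parameter]
    if low is not None and value < low:
        return False
    if high is not None and value > high:
        return False
    return True

print(check("oil_viscosity_cSt", 4.0))    # LDO-like viscosity: out of spec
print(check("oil_viscosity_cSt", 42.0))   # HFO within the window
```

Such checks belong naturally in goods-inward inspection and in the condition monitoring routine, so a drift outside any window is flagged long before the failure process restarts.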

Note

1. Effect of IAR on system performance (i.e. the boiler, superheaters and pipes):

Problems of high-temperature corrosion and brittle deposits drastically impair the performance of high-capacity steam boilers in power plants using HFO. Research* shows that heavy fuel oil (HFO) can be suitably burned in high-capacity boilers. However, if HFO is chemically treated with anticorrosive additives like vanadium and magnesium, high-temperature corrosion diminishes; this corrosion affects operational parameters such as furnace pressure, pressure drop across the superheaters and pipe-metal temperature, among others, along with the atomization and combustion processes. Inclusion of the right additives, like vanadium and magnesium, has therefore been found to diminish high-temperature corrosion and improve system performance. It thus makes sense to monitor these parameters, which can provide direct information on the degree of fouling, as well as on the effectiveness of the treatment, during normal boiler operating conditions.

*Source

2. Effect of vanadium oxide nanoparticles on friction and wear reduction

Ref:

  1. Two approaches to improving Plant Reliability:
  2. Rethinking Maintenance Strategy:
  3. Applying IAR Technique:

By Dibyendu De

Two Approaches to Improve Plant-wide Equipment Reliability

The first approach is to conduct a series of training programs along with hand-holding. During such programs, participants apply the concepts discussed to the critical machines, modifying the existing maintenance plans or methods to improve equipment reliability over a period of time. It is effective if the organization fulfills two vital conditions. First, the organization has in place a reasonably competent condition monitoring team, and the condition-based maintenance strategy is widely accepted and applied throughout the plant. Second, the number of failures/component replacements in the plant in a year is not more than, say, 60. We would call this method — The Interactive Training Method.
The second approach is more hands-on, direct and intensely collaborative. Each critical equipment is thoroughly examined in its dynamic condition to find the inherent imperfections that cause failures. Such imperfections, once identified by deep study, are then systematically addressed to eliminate existing and potential failure modes and so improve MTBF and safety. Based on the findings, the maintenance plan is formulated or appropriately modified to sustain the gains of implementing the findings. This activity is done during the program. This approach is effective when the failure rate in the plant is random and high (more than 60 failures/component replacements in a year) and/or the maintenance load is heavy and repetitive, with high maintenance cost, in spite of a reasonably equipped condition monitoring team being in place. We would call this — The Deep Dive Approach.
 
Outline of the two methods — the processes involved, along with approximate costs.
 
The Interactive Training Method:  
 
1. Such training sessions are conducted once every two months for a duration of 4 days each over a period of 24 months.
2. The training programs would essentially focus on the following: a) the RCM process, focused on Failure Modes, b) Vibration Analysis, c) Lubrication analysis and management, d) bearing failures and their practical causes, e) the Root Cause Failure Analysis method — the FRETTLSM method, f) friction, wear, flow and heat, g) foundations and structures, h) condition monitoring of electrical failures, i) maintenance planning based on the nature of failure modes, j) Life Cycle Costing, and k) auditing RAMS (Reliability, Availability, Maintainability and Safety), which would help in self-auditing the process — 12 programs in total.
3. Accordingly, there would be 12 visits to the plant. During each visit one of the above topics would be covered. Once the improvement concepts are delivered, the participants (assigned for focused plant improvement) would collaboratively design appropriate measures to improve or modify the existing maintenance plan of each critical machine so as to improve its MTBF and safety. This activity, which involves a fair amount of hand-holding, would be done during the visit. The number of critical machines taken up in each visit would be decided by the management or participants. Number of participants = 10 maximum.
4. Subsequent paid audits to refine the process would be optional, after the completion of the 24-month intervention period.
The Deep Dive Approach:
 
1. Such interactive sessions would be conducted once every two months for a duration of 4 days each over a period of 18 months.
2. Each interactive session of 4 days would focus on one critical equipment at a time. In total, 9 critical equipment would be covered during the 18-month period with a selected group of people assigned to the project of improving reliability. During each session, the critical equipment would be examined deeply and in totality to find the inherent imperfections that cause different failures in the system. Once these imperfections are identified, time is taken to address them appropriately and simultaneously formulate or modify the existing equipment maintenance plan to sustain the gains on an ongoing basis. This collaborative activity would be done during the program. In this process, participants learn by doing.
3. In total there would be 9 visits to the plant. During each visit one critical equipment would be taken up for the deep dive study, taken to its full logical conclusion. Number of participants = 10 maximum.
4. Subsequent paid audits of the progress would be optional, after the completion of the 18-month intervention period.

Rethinking Maintenance Strategy

As of now, maintenance strategy looks similar to the strategy adopted by the medical fraternity in its themes, concepts and procedures.

If things suddenly go wrong, we just fix the problem as quickly as possible. A person is considered healthy right up to the point when he or she becomes unhealthy.

That might work fine for simple diseases like harmless flu, infections, wounds and fractures. And it is rather necessary to do so during such infrequent periods of crisis.

But that does not work for more serious diseases or chronic ones.

For such serious and chronic ones, we either go for preventive measures — general cleanliness, hygiene, food and restoring normal living conditions — or predictive measures through regular check-ups that detect problems like high or low blood pressure, diabetes and cancer.

Once detected, we treat the symptoms post-haste, resorting to prolonged doses of medication or surgery or both, as in the case of cancer. But unfortunately, the chance of survival, or of prolonging the patient's life, is rather low.

However, it is time we rethink our strategy of maintaining health of a human being or any machine or system.

We may do so by orienting our strategy to understand the dynamics of a disease. By doing so, our approach changes radically. For example, let us take Type 2 diabetes, which is becoming a global epidemic. Acute or chronic stress initiates or triggers the disease (Initiator). Poor or inadequate nutrition or the wrong choice of food accelerates the process (Accelerator), whereas regular physical exercise retards or slows down the process (Retarder). It is worth mentioning that the Initiator(s), Accelerator(s) and Retarder(s) act together to produce changes that trigger unhealthy or undesirable behaviour or failure patterns. Such interactions between initiator(s), accelerator(s) and retarder(s), which I call ‘imperfections’, change the gene expression that gives rise to a disease — one that often has to be treated over the entire lifecycle of a patient or system, with a low probability of success.

The present strategy to fight diabetes is to modulate insulin levels through oral medication or injections to keep blood sugar to an acceptable level. It often proves to be a frustrating process for patients to maintain their blood sugar levels in this manner. But more importantly, the present strategy is not geared to reverse Type 2 diabetes or eliminate the disease.

The difference between the two approaches lies in “responding to the symptom” (high blood sugar) vs “responding to the imperfection” — the interaction between Initiators, Accelerators and Retarders. The response to the symptom is done through constant monitoring and action based on the condition of the system, without attempting to take care of the inherent imperfections. On the other hand, the response to imperfections involves appropriate and adequate actions around the I, A, R factors, and monitoring their presence and levels of severity.

So a successful strategy to reverse diabetes would be to eliminate or avoid the Initiator (or keep it as low as possible), weaken or eliminate the Accelerator, and strengthen or improve the Retarder. A custom-made strategy might be formulated by careful observation and analysis of the dynamics of the patient.

As a passing note, by following this simple strategy of addressing the “system imperfections“, I could successfully reverse my Type 2 Diabetes, which even doctors considered impossible. Moreover, the consequences of diabetes were also reversed.

Fixing diseases as and when they surface is similar to the Breakdown Maintenance strategy that most industries adopt. Clearly, other than in cases where the consequences of a failure are really low, this strategy is not beneficial in terms of maintenance effort, safety, availability and costs.

As a parallel in engineering, tackling diseases through preventive measures is like Preventive Maintenance and Total Productive Maintenance — a highly evolved form of Preventive Maintenance. Though such a strategy can prove very useful for maintaining basic operating conditions, its limitation, as with human beings, is that it does not usually ensure successful ‘mission reliability’ (a high chance of survival, or of prolonging healthy life to the maximum), as demonstrated by the Waddington Effect. (You may refer to my posts on the Waddington Effect here 1 and here 2.)

Similarly, the predictive strategy in medical science, along with its follow-up actions, is similar to Predictive Maintenance, Condition Based Maintenance and Reliability Centered Maintenance in the engineering discipline. Though we can successfully avoid or eliminate the consequences of failures, improvement in reliability (extending MTBF — Mean Time Between Failures) or performance is limited by the degree of existing “imperfections” in the system (the gene expression of the system), which the above strategies hardly address.

For an illustration of the IAR method, you may like to visit my post on — Application of IAR technique

To summarize, a successful maintenance strategy that aims at zero breakdown and zero safety and performance failures and useful extension of MTBF of any system may be as follows:

  1. Observe the dynamics of the machine or system. This might be done by observing energy flows or material movement and its dynamics, vibration patterns, analysis of failure patterns, design audits, etc. Such methods can be employed individually or in combination, depending on the context.
  2. Understand the failures, abnormal behaviour or performance patterns from equipment history, or review the existing equipment maintenance plan.
  3. Identify the Initiators, Accelerators and Retarders (IARs).
  4. Formulate a customized, comprehensive strategy and a detailed maintenance and improvement plan around the identified IARs, keeping in mind the action principles of eliminating, weakening and strengthening the IARs appropriately. This ensures Reliability of Equipment Usage (REU) over the lifecycle of the equipment at the lowest possible cost and effort. The advantage is that, once done, REU gives ongoing benefits to a manufacturing plant for years.
  5. Keep upgrading the maintenance plan, sensors and analysis algorithms based on new evidence and information. This leads to custom-built Artificial Intelligence for the system, which proves invaluable in the long run.
  6. Improve the system in small steps that give measurable benefits.

By Dibyendu De

The Case of the Burning Baghouse Filters

Recently I was invited to investigate a case of frequent burning of baghouse filter bags.

There were five such baghouses connected to five furnaces of a steel plant.

The client reasoned that the material of the bags was not suitable for the temperature of the gas it handled. However, changing the material did not change the frequency of bag burning. So a different approach was needed to home in on the reasons for the failures.

Hence, this is how I went about solving the case:

First I did a Weibull analysis of the failures. Engineers use the Weibull distribution to quickly find the failure pattern of a system. Once such a pattern is obtained, an engineer can go deeper by studying the probability density function (pdf). Such a pdf provides an engineer with many important clues. The most important clue it provides is the reason for the repeated failures, which is broadly classified as follows:

  1. Design related causes
  2. Operation and Maintenance related causes
  3. Age related causes.
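As a rough illustration of how a Weibull fit separates these cause classes, the shape parameter β of the fitted distribution is the key: β well below 1 suggests infant mortality (often design or installation related), β near 1 suggests random failures (often operation and maintenance related), and β well above 1 suggests wear-out (age related). A minimal sketch using median-rank regression — the bag-life numbers are invented for illustration, not the plant's actual data, and commercial tools use maximum likelihood with censoring:

```python
import math

def weibull_fit(failure_times):
    """Fit a 2-parameter Weibull by median-rank regression.

    Returns (beta, eta): shape and scale parameters.
    """
    t = sorted(failure_times)
    n = len(t)
    xs = [math.log(ti) for ti in t]
    # Bernard's approximation of the median rank for the i-th failure
    ys = [math.log(-math.log(1.0 - (i - 0.3) / (n + 0.4)))
          for i in range(1, n + 1)]
    mx, my = sum(xs) / n, sum(ys) / n
    beta = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            / sum((x - mx) ** 2 for x in xs))
    eta = math.exp(mx - my / beta)   # from the line y = beta*x - beta*ln(eta)
    return beta, eta

def classify(beta):
    """Map the Weibull shape parameter to the broad cause classes above."""
    if beta < 0.95:
        return "infant mortality -- often design/installation related"
    if beta <= 1.05:
        return "random failures -- often operation/maintenance related"
    return "wear-out -- age related"

# Invented bag lives in weeks
lives = [38, 45, 51, 54, 59, 63, 68, 74]
beta, eta = weibull_fit(lives)
print(f"beta={beta:.2f}, eta={eta:.1f} -> {classify(beta)}")
```

A mixed pattern, as in this case (design plus age), typically shows up as a poor single-population fit or a dogleg on the Weibull plot, which is the cue to separate the failure modes before fitting.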

In this case it turned out to be a combination of Design and Age related causes.

It was a vital clue that then guided me to look deeper to isolate the design and age related factors affecting the system.

I then did a modified FMEA (Failure Mode and Effect Analysis) for the two causes.

The FMEA revealed many inherent imperfections that were related to either design or aging.

Broadly, the causes were:

  1. Inability of the FD cooler (Forced Draft cooler) to take out excess heat up to the design limit before allowing the hot gas to enter the bag house.
  2. Inappropriate sequence of cleaning of the bag filters. It was out of sync with the operational sequence thus allowing relatively hot dust to build up on the surface of the bags.

Next, the maintenance plan was reviewed, using the Review of Equipment Maintenance (REM) method. The goal of such a review is to find maintenance tasks that are either missing or redundant, so that tasks can be added, deleted or modified accordingly. The aim of modifying the maintenance plan is to strike a balance between tasks that help detect incipient signals of deterioration and tasks that help maintain the longevity and stability of the system for a desired period of time.
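The missing/redundant screen at the heart of such a review can be sketched as a simple set comparison between the failure modes the analysis says must be addressed and the modes the current tasks actually cover (the mode and task names below are illustrative, not from the actual study):

```python
# Failure modes the FMEA says the maintenance plan must address.
failure_modes = {"bag overheating", "dust build-up on bags", "fan bearing seizure"}

# Existing tasks, keyed by the failure mode each one is meant to catch.
current_tasks = {
    "check bag inlet temperature": "bag overheating",
    "inspect hopper level": "hopper overflow",   # mode no longer in the FMEA
    "lubricate fan bearing": "fan bearing seizure",
}

covered = set(current_tasks.values())
missing = failure_modes - covered                      # modes with no task: add one
redundant = {task for task, mode in current_tasks.items()
             if mode not in failure_modes}             # tasks chasing absent modes

print("add tasks for:", sorted(missing))
print("delete/modify:", sorted(redundant))
```

In practice each retained task would then be checked for interval and technique against the failure mode's behaviour, but the gap analysis itself is exactly this kind of comparison.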

Finally, the investigation was wrapped up by formulating the Task Implementation Plan (TIP). It comprised 13 broad tasks that were broken up into more than 100 sub-tasks, with scheduled completion dates and accountability.

 

Observing Complexity

To me, observing real life systems is something like this:

A real-life system comprises a meaningful set of objects, diverse in form, state and function but inter-related through multiple networks of interdependencies and mutual feedbacks, enclosed by a variable space, operating far from its equilibrium conditions, not only exchanging energy and matter with its environment but also generating internal entropy to undergo discrete transformations triggered by the Arrow of Time, forcing it to behave in a dissipative but self-organizing manner, to either self-destruct in a wide variety of ways or create new possibilities in performance and/or behaviour owing to the presence of ‘attractors’ and ‘bifurcations’; thereby making it impossible to predict the future behaviour of the system in the long term, or to trace the previous states of the system with any high degree of accuracy, other than express them in terms of probabilities, since only the present state of the system might be observable to a certain extent and only a probabilistic understanding may be formulated as to how it has arrived at its present state and what would keep it going; thus triggering creative human responses to manage, maintain and enhance the system's conditions, function and purpose, and to create superior systems of the future for the benefit of society at large.

Such a representation of an observation looks quite involved. Perhaps it might be stated in a much simpler way. Most real-life systems behave in a complex manner, creating a multitude of problems of performance and failures. But how do we get rid of the complexity and uncertainty exhibited by systems? We may do so by deeply observing the complex behaviour of the system to improve our perception and gain insights about its essence; finding out the underlying ‘imperfection’ that causes the apparent complexity and uncertainty; and then finding ways to improve the existing system, or create a new one, and maintain it in the simplest possible manner. We do this by applying the principles of chaos, reliability and design. Surprisingly, the same process might be used to troubleshoot and solve the problems we face on a daily basis. If done, we are no longer dominated or dictated to by the ‘special whims’ of the system.

The crux of the matter is how we observe reality and understand it so as to make meaningful choices as responses to life and living.

Who is the best Advisor?

In an increasingly complex world, we need to consult advisors for many things. It helps us make decisions and take appropriate actions. Even President Obama, presumably the most powerful man in the world, depends on a long list of advisors to be effective. But how does one judge the quality of an advisor?

The best advisors are those who can see through a system quickly, find potential problems that are yet to manifest and correct those through right design. Such advisors would not be known outside their select group of intelligent clients.

Similarly, good advisors are those who can examine a system, find inconsistencies, contradictions and incipient faults and come up with ideas that would prevent such faults from re-appearing in the future. They would be hardly known outside their select group of clients.

But ordinary advisors are those who would energetically open up things, examine parts of the system, enter into prolonged discussions without specific outcomes, involve lot of people, forcefully present their ideas to others and encourage them to tackle their problems in a firefighting mode without the assurance of a total cure. They are widely known to people who matter most.

This insight came from my reflecting upon an ancient Chinese story, which follows:

According to the old story, a lord of ancient China once asked his doctor, a member of a family of healers, which of them was the most skilled in the art of healing.

The physician, whose reputation was such that his name became synonymous with medical science in China, replied, “My eldest brother sees the spirit of sickness and removes it before it takes shape, so his name doesn’t get out of the house. My elder brother cures sickness when it is still extremely minute, so his name doesn’t get out of the neighbourhood. As for me, I puncture veins, prescribe potions and massage skin, so from time to time my name gets out and is heard among the lords.”

A Ming dynasty critic writes of this little tale of the physician: “What is essential for leaders, generals and ministers in running countries and governing armies is no more than this.”

Creativity in Solving Complex Problems

The other day, at the end of my seminar on “Solving Complex Engineering Problems” a delegate asked me as to whether the entire process of solving complex problems can be automated in some way by means of a software instead of relying on human creativity.

Such a response wasn’t unexpected. In the corporate world the word “creativity” is often looked at with suspicion. Organizations would rather prefer structured, standard approaches — “brainstorming” at 10.00 am sharp, teamwork, or collaborative effort — which in my opinion do little to help anyone solve complex problems, or even address complex problems correctly.

That might be the single most important reason why “complex problems” remain unresolved for years affecting profitability and long term sustenance of an organization. Failing to resolve complex problems for years often earns such problems the sobriquet of “wicked problems”, which means that such problems are too tough for “any expert” to come to grips with.

What they sadly miss out is the role of creativity in solving complex problems, which no automation or technology can ever replicate. They miss this because most organizations systemically smother or mercilessly boot out any remnant of creativity in their people since they think that it is always easier to control and manage a regimented workforce devoid of even elementary traces of creativity.

So, is managing creativity and creative people a messy affair? On the surface it seems so. This is simply because we generally have only a vague idea of what drives, inspires and really sustains creativity.

Creativity is not about wearing hair long or wearing weird clothes, singing strange tunes, coming to office late and being rude to bosses for no apparent reasons. These things hardly make anyone creative or help anyone become a more creative person.

Actually, things like “being attentive and aware”, “sensitive”, “passionate”, “concerned”, “committed” and above all “inventive” just might be the necessary ingredients to drive, inspire and sustain creativity.

Why?

Though there are many ways of describing and defining creativity, what I like best is: “creativity is the expression of one’s understanding and expression of oneself” — the deeper the understanding, the better the expression of creativity.

When we look at creativity in this manner, it is obvious that we are all creative, though the expression and its fidelity vary to a great extent. Clearly, some are simply better than others.

Further, if creativity may be thought of as a process, then the inputs and the clarity of our understanding of ourselves are more valuable elements of the system than the outputs, which the process consistently churns out anyway (remember the uncountable hours we spend in organizations meeting, discussing and brainstorming to solve complex problems).

In these days of economic depression, organizations can really do themselves a huge favor if only they pay more attention to facilitating such inputs to people rather than getting overly worried about control and management by conformity.

Developing Non-linear Thinking Skills

We know that linear logic fails when comprehending complexity. That appears to be the most important reason why most people find it difficult to understand complex situations or grapple with complex problems.

With simple linear logic, principles come first and deductions follow. Hence the process may be described as:

Observe -> Model the observations based on relevant domain theory -> apply Principles/mathematics -> Deduction

Fair to say that this standard approach, based on linear logic, is used in science and engineering to solve linear problems. Since this is an efficient way of thinking it dominates our educational, professional and social lives.

But when it comes to solving non-linear complex problems (unfortunately most life problems are non-linear) application of linear logic fails. Instead what is needed is the development of non-linear thinking skills.

In fact, a non-linear thinking style is a necessary skill within the larger theoretical framework of digital literacy across multiple formats, known as transliteracy, practised in a transmedia learning environment.

Nonlinear thinking styles are defined as using intuition, insight, creativity and emotions when comprehending and communicating information (Vance, Groves, Paik and Kindler, 2007).

But how to develop non-linear thinking skills?

Below I give a three-step approach, one of the many approaches I developed for the specific purpose of developing the non-linear thinking skills of my adult professional students. This specific technique is christened the Fugue technique.

1. Think in terms of fugue. In a fugue, all the notes cannot be constrained into a single melodic scale. Compressing everything into one single melodic scale is analogous to modelling a phenomenon or behaviour based on a high level of abstraction, which is the dominant characteristic of linear thinking style. Make this clear to the participating group. It would relieve them of the unnecessary stress of finding the “one right answer” or “one right approach” to a complex problem.

2. Bring people together to tackle a complex problem. Make sure that the participants are familiar with the problem. This means that complex problems are to be selected from the familiar working environment of the participants or problems the participants have grappled with but failed to find a solution.

Putting a number of people together gives us a big advantage. Different people see the same problem in different ways, depending on their specific strengths, mental makeup, tendencies and practice. Some find certain parts of the problem easy to see and understand which others might find too difficult even to notice. Each member of the group is then encouraged to focus on the parts of the problem that come easily to them, so as to come up with their own unique perspectives and understanding.

Before allowing people to jump in, preferably use different media to present a problem — narrative, story telling, printed material, videos, pictures, data, internet references etc.

3. Invite the group to plunge directly into the midst of things and follow the temporal order created by the thoughts of the different group members. Build upon each other's thoughts. Never mind if different strings of thought build different lines of thinking; that is the most desired output. Encourage all forms of communication: dialogues, debates, discussions, collaboration, negotiation, etc. Be patient with the flow of time; activities might show sudden bursts of energy at various points. Allow people to express their thoughts through different media: verbal exchanges, slide shows, discussions, drawings, doodles, storytelling, narratives/presentations, logical interpretation through principles, etc. Each member is expected to communicate in his or her preferred style of communication.

Link the different strings of thought, or different perspectives, to build a collective but coherent understanding of the complex problem without attempting to force them into "one melodic scale." That is, it is not necessary to align the different perspectives into one linear path. Multiple paths are encouraged, and multiple solutions become the norm. The output measured against time is exponential compared to a linear approach, increasing both the depth and the breadth of learning. In Nemetic terms, the resultant ecology is known as a nemePlx or nPx.

When a group performs this exercise on many live problems over a span of a few days (a four-day session appears to be just enough), it propels the students to develop their non-linear thinking skills. It also develops their transliteracy skills (a non-linear thinking skill) while immersed in a transmedia learning environment.

Note:

1. This Fugue technique has been used extensively with power plant professionals solving their complex problems.

2. The author is of the opinion that non-linear thinking skills cannot be taught in any explicit manner.

References:

1. Osterman, M., Reio, T. G., Jr., & Thirunarayanan, M. O. Digital Literacy: A Demand for Nonlinear Thinking Styles. Florida International University, USA. http://digitalcommons.fiu.edu/cgi/viewcontent.cgi?article=1321&context=sferc

2. Davidson, C. (2011). Now You See It: How the Brain Science of Attention Will Transform the Way We Live, Work, and Learn. New York, NY: Penguin.

Learning Quickly & Adapting Rapidly – A Simple View

If I were to offer a very simplified view of our brain, it would be this:

Our brain has three parts:

1. The Rear Brain

2. The Mid Brain

3. The Frontal Brain

The Rear Brain

The rear part of the brain is an alarm, which goes off as soon as it senses danger that threatens survival and life. It works on the principle of 'fear' (the modern term is stress), which propels us either to fight or to run away. When we face anything new, this part of the brain triggers first. Though for city dwellers tigers and snakes are mostly not around to scare us to death, this ancient part of our brain sets off alarms on sensing anything unusual, uncommon, seemingly too big for us to handle, new, or outside our regular routine or schedule. But isn't learning all about embracing something new? So we have a big problem: learning quickly and adapting rapidly to changing situations.

The Mid Brain

This part of the brain stores all our sensations and experiences as images, including the lessons we learn. It is the memory section, and it throws up information as and when we need it. So when faced with something new, this part of the brain searches for something similar and prompts us to act on what is already stored there. At times it conjures up new images by combining existing ones, some of which can be illusory or false, which may create stress or delusion. Under stress, it signals the rear brain, triggering the fight-or-flight response. When deluded, it induces us to take actions without thinking of undesirable consequences. These become big problems when we try to learn anything new or different from familiar objects or situations, because it is difficult to pick out something new from seemingly familiar patterns. The mid brain would say, "You know that. There is nothing new in the world." It forces us to recognize only existing patterns, which usually prompts routine or scripted behavior in response. This poses a big impediment to learning quickly and adapting rapidly to changing situations.

The Frontal Brain

This is the newest part of the brain, responsible for learning from any situation under any condition, enabling us to create new solutions and new actions. However, this part of the brain isn't powered up fully so long as the mid brain and the rear brain dominate the show. That, too, is a big problem for learning quickly and adapting rapidly to changes.

So what is the way out?

The way out of the mess may be summed up in a neat mantra: 3S, which stands for Slow, Small and Steady.

Slow:

Slowing down offers many benefits, the most important being relaxation of the body and mind. Once the body and mind are relatively relaxed, the rear brain, which is usually very alert, lets down its guard, allowing the other parts of the brain to act fully. This facilitates learning something new.

Small:

When we notice small and subtle things, think in small pieces and connect them, and take small actions, the rear brain doesn't interfere, since it doesn't consider small things a danger to survival. Likewise, when we see, think and do small things, the mid brain doesn't interfere much with the new experience either, since it usually fails to conjure up an existing pattern to match the small experiences, beyond trying to judge them by giving them a name and form. So, once we suspend our judgement while experiencing something new, the possibility of new learning grows exponentially. And once the small things are done, the mid brain faithfully stores the lessons for better adaptation and survival in the future.

Steady:

So what happens when, over time, we steadily exchange value through small actions? The small actions accumulate, coalesce, combine and recombine in a self-organizing way to produce new learning, which usually grows wide and deep enough to allow us to learn quickly and adapt rapidly to changing situations.

Go Slow. See Small. Engage Slowly. Think Small. Act Small. Go Steady.

That is perhaps the easiest way to learn new things quickly and adapt rapidly to changes, promoting resilience and sustainability for organizations, groups, communities and individuals.

Note: This is a part of a forthcoming book — “Sleeping with a Stranger” — a new book belonging to the Nemetics series.