I’m writing this between lectures and dinner on the first day of a two-day FMEA forum in Osnabrück, and I’m trying to figure out what to make of it all. If that opening sentence fails to give the impression that I’m bubbling with enthusiasm or energy from the day, it’s because - well, I’m not. Of course, sitting around being talked at isn’t the most energising of inactivities, but the content shown so far hasn’t fired me up in any significant way.
The theme of the forum is “FMEA success stories” - but those stories have in fact been pretty sparse so far. Two of today’s main presentations were about their respective companies' efforts and struggles to implement strong FMEA systems and culture into their workflows. One company gave an update on its mission (now into its fifth year of ‘x: unknown’) to roll out an FMEA software system and methodology across the group. The other gave a shorter overview of how they’re getting on (or not) as they make a start on the same challenge. The takeaway from these presentations was the obvious: yes, it’s difficult to move away from scattered, ineffective but audit-tick-boxable Excel files to a centralised monolith. And, yes, you need executive and management support for such an undertaking. But I didn’t hear a story about benefits even from the company that is furthest along the road. They have basically arrived - but where, and why? I didn’t hear any anecdotes about finding otherwise hidden potential failures, about reducing potential quality issues by humungous amounts, anything like that.
One presenter plucked up the courage to show his efforts to convince a director to invest in FMEA by means of monetary value (another key theme of this forum). His take was that robust and reusable FMEAs help to prevent project overruns - on the basis that you won’t be validating things too late, and that when you do validate, you can expect OK results - so the most convincing financial metric was in fact time. A shaky assumption if ever there was one, and one that nobody could pluck up the heartlessness to destroy in the Q&A afterwards.
There is a close-knit group of FMEA gurus in Germany who attend each and every one of these forums. They consult for others, and many of their clients were here too - so there is certainly a self-appreciative air to the proceedings, of their being a natural and self-explanatory part of the engineering world. Hard data is less available. But one guru at least mentioned that his research team studied around 500 FMEAs before and after switching to more robust software and methods: the more robust methods rooted out around 30% more potential failure causes than the older versions. He did proceed to weaken his argument somewhat by stating that most of these were repeat or piffling items, and that there was no factoring in of the “novelty” problem (would these new causes have been arrived at had the systems been analysed with fresh eyes, albeit using older methodologies?) - but at least a couple of the new ones could be treated as noteworthy.
So - these success stories we’re talking about. They are where, exactly?
There was a presentation on making FMEA meetings more effective by trying to eliminate discussions on rating causes (occurrence and detection ratings), and by highlighting the financial aspect of potential risk - but that’s still a very inward-looking process improvement.
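For readers outside the field: the ratings being argued over feed the classic AIAG-style Risk Priority Number, and the “financial aspect” amounts to weighting a failure cause by money rather than by ordinal scores. A minimal sketch of both - all numbers invented for illustration, and the money-weighted function is my own framing, not the presenter’s:

```python
# Classic FMEA risk arithmetic (AIAG-style RPN), plus a hypothetical
# money-weighted view of the same failure cause. Numbers are invented.

def rpn(severity: int, occurrence: int, detection: int) -> int:
    """Risk Priority Number: the product of three 1-10 ratings."""
    for rating in (severity, occurrence, detection):
        if not 1 <= rating <= 10:
            raise ValueError("ratings must be on a 1-10 scale")
    return severity * occurrence * detection

def expected_cost(failure_cost_eur: float, probability: float) -> float:
    """Money-weighted risk: cost of one field failure times its probability."""
    return failure_cost_eur * probability

# A hypothetical "seal leak" cause, rated by a team:
seal_leak_rpn = rpn(severity=8, occurrence=4, detection=6)        # 192
seal_leak_risk = expected_cost(failure_cost_eur=50_000, probability=0.02)  # €1000
```

The appeal of the second function is obvious - directors understand euros - but it quietly replaces the rating debate with an equally debatable probability estimate.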
A further presentation from two “big” Americans (in the sense of being FMEA gurus from America) tried to show the differences between the AIAG- and the VDA-described methodologies - but those differences turned out to be thoroughly overblown. When I asked whether anyone had “raced” AIAG against VDA on one common design, to test the theory, or even to see whether one method favoured one type of result over another, the answer I received was an at least nicely succinct “no.”
That discussion all boiled down to “it doesn’t matter which method you use, as long as you use it properly.”
To turn the focus back to success in FMEAs: how can it be measured? Of course, it’s the nature of the FMEA that potential failures are thought up, thought out and minimised - and those thoughts involve (or should involve) a lot of internal company knowledge and evidence. So nobody wants to (or is permitted to) talk about specifics - but with the forum having been so titled, I’d have thought the organisers would have found a way to tell the stories in a stronger way.
Perhaps, though, in the context of such a forum, they felt they were preaching to the converted, anyway, so didn’t need to do much “selling”.
Click ‘Go’ to start
Generally, what an FMEA is trying to do with a mechanical system is to bug-check its logic. Perhaps we could borrow a better tool from software development (How Google Tests Software, for example), where a model is created that, through a multitude of test runs, can quickly highlight the potential weak points. That would be even more difficult than manually thinking things through - but cleverer. Perhaps that’s simply where we stand right now: we’re chipping away with post-stone-age but pre-steam tools - and doing a pretty decent job of it, we think.
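To make that idea concrete: here is a toy sketch (entirely my own, not from any talk) of what “bug-checking the logic” could look like. A subsystem’s control behaviour is modelled as a tiny state machine with a deliberately planted flaw, and random test runs hunt for sequences that violate a safety invariant - the kind of hidden failure cause a brainstorming session might miss. All names and transitions are invented:

```python
import random

# Toy model: a machine that must return to "idle" (operator acknowledgement)
# after overheating before it may run again. The cooling->running transition
# is a deliberately planted bug for the fuzzer to find.
TRANSITIONS = {
    ("idle", "start"): "running",
    ("running", "overheat"): "cooling",
    ("running", "stop"): "idle",
    ("cooling", "cooled"): "running",   # bug: restarts without operator ack
    ("cooling", "stop"): "idle",
}
EVENTS = ["start", "stop", "overheat", "cooled"]

def safe(history):
    """Invariant: after overheating, pass through 'idle' before 'running'."""
    overheated = False
    for state in history:
        if state == "cooling":
            overheated = True
        elif state == "idle":
            overheated = False
        elif state == "running" and overheated:
            return False
    return True

def fuzz(runs=1000, steps=20, seed=42):
    """Random test runs; return the first history that breaks the invariant."""
    rng = random.Random(seed)
    for _ in range(runs):
        state, history = "idle", ["idle"]
        for _ in range(steps):
            # Unknown events leave the state unchanged.
            state = TRANSITIONS.get((state, rng.choice(EVENTS)), state)
            history.append(state)
        if not safe(history):
            return history  # a counterexample: a hidden failure cause
    return None

counterexample = fuzz()
```

A thousand random runs find the flawed sequence in milliseconds; the FMEA equivalent is a roomful of engineers hoping someone asks “what if it cools down on its own?”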
I know, let’s look in the FMEA
A common selling point of the FMEA is its potential (that word again!) to capture a company’s knowledge through lessons learned, updates and actions, links to evidence and reports, and so on. I was sold on that for a while too, but right now I’m less confident about it. After the “five-year mission to explore new worlds” presentation, I asked how many of that company’s development engineers use the FMEA software. The answer was: none. Of course expert systems can be difficult to drive, but it seems strange to me to have to rely on external moderators to create the FMEAs, and then on searching through PDF documents to find key lessons learned and design considerations.
Where does FMEA belong?
Many of the FMEA colleagues I have met so far come from Quality Management - which I feel should be the parking spot for completed FMEAs. But FMEAs that are still themselves under development (theoretically in parallel with the - ahem - new product that is gestating) should reside with the product- or process-development teams.
I raised this point later over beers and dinner. Others countered that engineering students don’t learn about FMEAs in any meaningful way, so they can’t be expected to maintain one as part of their jobs in the way that quality engineers do. But that’s more a comment on engineering education than on any philosophical decision to shield development engineers from such shudderingly plodding work as FMEAs (to put a negative twist on it).
Go on, have another beer and lighten up
The key to these FMEA forums is, of course, the networking. Everybody in the field, regardless of product, industry or division (quality or development) has basically the same problems with FMEAs. But few really try to come to terms with what it means, and what benefits the system has. So it was good to meet a few skeptics among the herd who saw the presentations as well-meaning but non-value-add guff, and who also saw the principal value of such forums in the people themselves.
The evening dinner of Grünkohl and Pinkel sausages in a local Osnabrück brewery (Rampendahl) made for fascinating conversation, especially as I ended up sitting amongst three of the world’s key FMEA software developers.
But now, after some noticeable time-hopping, it’s time to upload this post, grab some breakfast, and get ready for the onslaught of the second day…