  • Presenting... the Phishikawa diagram!

    Why I'm writing this

    Extracting something interesting to say out of what feels like a good - but perhaps vague - initial notion can be tricky and frustrating, especially for someone like this engineer trying to be philosophical about the engineering life. That's what happened when I was working on a now postponed post: I found myself getting stuck in the descriptions, on the surface, without finding a way to get satisfyingly deep into what I was trying to say: more drifting than drafting.

    Then I remembered another (unfinished) draft from an (unfinished) series on engineering tools and I thought it time to put that tool - the Ishikawa diagram - to use.

    The Ishikawa diagram

    Also known as the fishbone diagram because of the way several themes jut out from a central theme's spine, this isn't as uniquely an engineering tool as the FMEA, but it is much used in industrial settings, mostly in the context of quality investigations. Looking back, I found a wry post of mine about it from back in 2012, where, whilst being more fixated on the overly literal depiction of the fishbone diagram, I listed out the typical topics for a quality investigation (the several 'M' words like "Machine", "Manpower", "Metrology", etc, as titles).

    Fundamentally, it's about seeding ideas to be condensed around common themes to ensure that nothing obvious is overlooked when searching for a root cause. It's important to note here that it can also be an efficient tool, in that clearly irrelevant or non-contributing aspects can quickly be disregarded, allowing focus on the trickier stuff.
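
    (For the code-minded amongst you: a fishbone really boils down to a problem on the spine and themed lists of candidate causes on the bones. Here's a rough, purely illustrative sketch in Python - the 'M' themes are the usual suspects, but the problem and the causes are invented for the example.)

```python
# A fishbone (Ishikawa) diagram reduced to its essentials: a problem statement
# (the fish's head) and themed branches holding candidate causes.
from collections import defaultdict

class Fishbone:
    def __init__(self, problem):
        self.problem = problem             # the "head" of the fish
        self.branches = defaultdict(list)  # theme -> candidate causes

    def add_cause(self, theme, cause):
        self.branches[theme].append(cause)

    def empty_branches(self, themes):
        """Themes we claimed to consider but haven't seeded with any ideas yet."""
        return [t for t in themes if not self.branches.get(t)]

# The usual 'M' themes from a quality investigation (illustrative selection)
M_THEMES = ["Machine", "Method", "Material", "Manpower", "Measurement", "Milieu"]

# Invented problem and causes, purely for the example
fb = Fishbone("Leaking seal on delivered parts")
fb.add_cause("Machine", "Worn moulding tool")
fb.add_cause("Material", "Out-of-spec elastomer batch")

# The 'nothing obvious overlooked' check: which bones are still bare?
print(fb.empty_branches(M_THEMES))
# -> ['Method', 'Manpower', 'Measurement', 'Milieu']
```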

    The Phishikawa diagram...

    ... or "Phishbone" diagram, is the philosophical version of the original. Its bones consist of key philosophical themes to aid in the search for the roots of the topics we want to consider. In this case, it won't initially be very efficient, as I'll need to spend time with each topic, these being:

    • Ontology (attempting to describe how a topic is)
    • Epistemology (understanding what we can know about the topic)
    • Acting and working (understanding what modes of action and knowledge are involved - based on Aristotle's concepts of poiesis (making) & praxis (politics))
    • Hermeneutics (how we interpret texts and artefacts resulting from this topic)
    • Phenomenology (how we experience this topic personally)

    This is what it looks like, in its first iteration:

    [Image: the Phishikawa (Phishbone) diagram, first iteration, August 2023]

    (I used the fishbone diagram template in Xmind to create this version.) I have also created a separate page for the Phishikawa diagram on this site, here.
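
    (And in code terms, for consistency with the Ishikawa sketch above: the same scaffold, just with philosophical bones. The topic and the one seeded note are placeholders I've made up for illustration.)

```python
# The Phishikawa in its barest form: five philosophical themes as the bones,
# with whatever topic is under consideration on the spine.
phishikawa = {
    "topic": "The engineering meeting",   # hypothetical example topic
    "branches": {
        "Ontology": [],            # how the topic is
        "Epistemology": [],        # what we can know about it
        "Acting and working": [],  # poiesis (making) & praxis
        "Hermeneutics": [],        # the texts and artefacts it produces
        "Phenomenology": [],       # how we experience it personally
    },
}

phishikawa["branches"]["Phenomenology"].append("Why do meetings feel so draining?")

# The 'not very efficient' part: every bare branch still needs time spent on it
todo = [theme for theme, notes in phishikawa["branches"].items() if not notes]
print(todo)
```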

    My hope is that, using this as a form of mental scaffolding, I can construct a more detailed analysis of whichever topic I want to consider. But don't worry, I'll try to keep the text chatty and legible nevertheless!

    → 5:30 PM, Aug 26
  • Today's two days of FMEAs

    I'm writing this between lectures and dinner on the first day of a two-day FMEA forum in Osnabrück, and I'm trying to figure out what to make of it all. If that opening sentence fails to give the impression that I'm bubbling with enthusiasm or energy from the day, it's because - well, I'm not. Of course, sitting around being talked at isn't the most energising of inactivities, but the content shown so far hasn't fired me up in any significant way.
     
    The theme of the forum is "FMEA success stories" - but those stories have in fact been pretty sparse thus far. Two of today's main presentations were about their respective companies' efforts and struggles to implement strong FMEA systems and culture into their workflows. One company gave an update on its mission (now into its fifth year of 'x: unknown') to implement an FMEA software system and methodology into the group. The other gave a shorter overview of how they're getting on (or not) as they make a start on the challenge. The takeaway from these presentations was the obvious: yes, it's difficult to move away from scattered, ineffective but audit-tick-boxable Excel files to a centralised monolith. And, yes, you need executive and management support for such an undertaking. But not even from the company that is so far along the road did I hear a story about benefits. They have basically arrived, but where, and why? I didn't hear any anecdotes about finding otherwise hidden potential failures, about reducing potential quality issues by humungous amounts, anything like that.
     
    One presenter plucked up the courage to show his efforts to convince a director to invest in FMEA by means of monetary value (another key theme of this forum). Our presenter's take was that robust and reusable FMEAs help to prevent project overruns (on the basis that you won't be validating things too late and, when you do validate, you can expect OK results), so the most convincing financial metric was in fact time - a shaky assumption if ever there was one, and one that nobody could pluck up the heartlessness to destroy in the Q&A afterwards.
     
    There is a close-knit group of FMEA gurus in Germany, who attend each and every one of these forums. They consult for others, and many of their clients were here, too - so there is certainly a self-appreciative air to the proceedings, of their being a natural and self-explanatory part of the engineering world. Data is less available. But one guru at least mentioned that his research team studied around 500 FMEAs before and after switching to more robust software and methods: the more robust methods rooted out around 30% more potential failure causes than the older versions. He did proceed to weaken his argument somewhat by stating that most of these were repeat or piffling items, and there was no factoring in of the "novelty" problem (would these new causes have been arrived at had the systems been analysed with fresh eyes, albeit using older methodologies?), but at least a couple of the new ones could be treated as noteworthy.
     

    So - these success stories we're talking about. They are where, exactly?

    There was a presentation on making FMEA meetings more effective by trying to eliminate discussions on rating causes (occurrence and detection ratings), and by highlighting the financial aspect of potential risk - but that's still a very inward-looking process improvement.
     
    A further presentation from two "big" Americans (in the sense of being FMEA-gurus from America) tried to show the differences between the AIAG and the VDA-described methodologies - but that was a thoroughly overblown observation. When I asked if anyone had "raced" AIAG against VDA on the basis of one common design, to test the theory, or even to try and see if one method favoured one type of result over another, the answer I received was an at least nicely succinct "no."
     
    That discussion all boiled down to "it doesn't matter which method you use, as long as you use it properly."
     
    To turn the focus back on success in FMEAs: how can it be measured? Of course, it's the nature of the FMEA that potential failures are thought up, thought out and minimised - and those thoughts involve (or should involve) a lot of internal company knowledge and evidence. So nobody wants to (or is permitted to) talk about specifics - but with the forum having been titled as it was, I'd have thought that the organisers would have found a way to tell the stories more convincingly.
     
    Perhaps, though, in the context of such a forum, they felt they were preaching to the converted, anyway, so didn't need to do much "selling".
     

    Click 'Go' to start

    Generally, what an FMEA is trying to do with a mechanical system is to bug-check its logic. Perhaps we could consider a better tool from software development (How Google Tests Software, for example), where a model is created that, through a multitude of test runs, can quickly highlight the potential weak points. That would be even more difficult than manually thinking things through - but cleverer. Perhaps that's simply where we stand right now: we're chipping away with post stone-age but pre-steam tools - and doing a pretty decent job of it, we think.
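
    To sketch what I mean - very loosely, with a toy model I've invented for the purpose and no claim that this is how Google (or anyone else) actually does it - the idea is to run a cheap model of the system thousands of times with randomised inputs and let the weak points fall out, rather than reasoning through each combination by hand:

```python
# A loose sketch of 'bug-checking' a design by running a simple model many
# times with randomised inputs. The model, parameters and limit are invented
# purely for illustration.
import random

def seal_temperature(ambient_c, load_pct, cooling_ok):
    """Toy model: predicted seal temperature for a gearbox-like system."""
    temp = ambient_c + 0.6 * load_pct       # crude load-dependent heating
    if not cooling_ok:
        temp += 25.0                        # penalty for a failed cooling path
    return temp

SEAL_LIMIT_C = 110.0                        # invented design limit
failures = []

random.seed(1)
for _ in range(10_000):                     # the 'multitude of test runs'
    ambient = random.uniform(-20, 45)       # operating climate range, degC
    load = random.uniform(0, 100)           # duty cycle, percent
    cooling = random.random() > 0.05        # occasional cooling failure
    if seal_temperature(ambient, load, cooling) > SEAL_LIMIT_C:
        failures.append((round(ambient, 1), round(load, 1), cooling))

print(f"{len(failures)} of 10000 runs exceeded the seal temperature limit")
print("examples:", failures[:3])
```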
     

    I know, let's look in the FMEA

    A common selling point of the FMEA is its potential (that word again!) to capture a company's knowledge through lessons learned, updates and actions, linking to evidence and reports, etc. I was also sold on that for a while, but right now I'm less confident about it. After the "5-year mission to explore new worlds" presentation, I asked how many of that company's development engineers use the FMEA software. The answer was: none. Of course expert systems can be difficult to drive, but it seems strange to me to have to rely on external moderators to create the FMEAs, and then on searching through PDF documents to find key lessons learned and design considerations.
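
    What I'd hoped "capturing knowledge" would mean is something structured and searchable rather than a pile of PDFs - something along the lines of this made-up sketch, say:

```python
# A rough sketch of what a queryable 'lessons learned' record could look like,
# as opposed to a folder of PDFs. Fields and the example entry are invented.
from dataclasses import dataclass, field

@dataclass
class LessonLearned:
    failure_mode: str
    cause: str
    action_taken: str
    evidence: list = field(default_factory=list)   # report numbers, links
    tags: list = field(default_factory=list)

LESSONS = [
    LessonLearned(
        failure_mode="Seal leak at low temperature",
        cause="Elastomer below its glass-transition temperature",
        action_taken="Changed the material specification",
        evidence=["TR-2014-031"],                   # hypothetical report ID
        tags=["seal", "material", "cold"],
    ),
]

def search(lessons, keyword):
    """Naive keyword search across the text fields and tags."""
    kw = keyword.lower()
    return [
        item for item in lessons
        if kw in (item.failure_mode + " " + item.cause + " " + item.action_taken).lower()
        or any(kw in tag for tag in item.tags)
    ]

for hit in search(LESSONS, "seal"):
    print(hit.failure_mode, "->", hit.action_taken)
```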
     

    Where does FMEA belong?

    Many of the FMEA colleagues I've met so far come from Quality Management - which I feel should be the parking spot for completed FMEAs. But FMEAs that are still themselves under development (theoretically in parallel with the *ahem* new product that is gestating) should reside with the product- or process-development teams.
     
    I raised this point later over beers and dinner: others countered that engineering students don't learn about FMEAs in any meaningful way, so they can't be expected to maintain one as part of their jobs in the same way that quality engineers do: but that's more of a comment on engineering education than on any philosophical decision to shield development engineers from such shudderingly plodding work as FMEAs (to put a negative twist on it).
     
    Go on, have another beer and lighten up

    The key to these FMEA forums is, of course, the networking. Everybody in the field, regardless of product, industry or division (quality or development), has basically the same problems with FMEAs. But few really try to come to terms with what it means, and what benefits the system has. So it was good to meet a few sceptics among the herd who saw the presentations as well-meaning but non-value-add guff, and who also saw the principal value of such forums in the people themselves.
     
    The evening dinner of Grünkohl and Pinkel sausages in a local Osnabrück brewery (Rampendahl) made for fascinating conversation, especially as I ended up sitting amongst three of the world's key FMEA software developers.
     
    But now, after some noticeable time-hopping, it's time to upload this post, grab some breakfast, and get ready for the onslaught of the second day...
    → 12:20 PM, Feb 25
  • On our friends in Quality

    Why do we have quality departments? Even posing the question feels like a minor rebellion on my part, since "Quality" is such an integral concept in the business of producing things. For "integral" we can also read "taken for granted." Sometimes it can be good to tug at the tent poles of the status quo and see whether it collapses in a heap around us, or if we can move it somewhere more interesting.

    This post isn't just coming from a theoretical standpoint, of course. I've had so many run-ins with colleagues from quality that I simply have to question that whole role. Too often have they acted as alarmist messengers and chasers of other people's actions whilst understanding neither the problem nor the product nor the processes that they are supposed to be monitoring - that is, seemingly doing nothing about it themselves. They don't add value, nor do they seem to save value - they transmit urgency. Unfortunately, transmitting urgency to the wrong sort of person (for example, me) is precisely the worst way of getting things resolved quickly and effectively.

    So, whilst we all inherently understand the idea of producing things in ways acceptable both to customers and to our finances, do we really need quality departments, quality managers, quality engineers? I think we could be cannier about the whole thing - and ditch them totally.

    The best way to do that would be to design an engineering company from scratch - alas, not something that I've ever done. But for me, the basic idea would be that the tasks that quality departments undertake can be distributed amongst other departments where those tasks make more sense.

    The flip-flop question is this: is it better to decentralise those tasks amongst experts, or to collect them all under one roof and standardise? The size of a company - tipping around a certain critical morass - is probably a key factor in this decision. But the prospect of ISO / TS audits should not be!

    I work for a largish automotive supplier, and we have quality departments coming out of our ears: it's the world I live in. Deleting them (distributing them) would mean for me personally that I would need to get my designs, from material selection through to defining tolerances, just right; my colleagues in production would have to design their processes to produce within the parameters of the drawings and specs I give them, and all of that needs to be validated and monitored.

    In other words, nothing: eliminating the quality department would have no effect on my key role, nor on the role of my friends in process engineering.

    However, I'm forever getting involved in quality concerns, analysing returned or quarantined parts, performing root-cause analyses. This can help me understand our product better, but rarely have I had to change a design as a result of these investigations. Enfolding one or two of our colleagues from quality into our test and development organisation would mean that those defects could be analysed as thoroughly as they are now, and lessons learned could be centralised, whilst freeing me to work on the next generation of products, which is what, I believe, I'm supposed to be doing.

    Manufacturing will still need people and systems to check and to confirm that product meets requirements, both at any given instant and over time - that's for sure; such roles will never go away, but these days we can trust our production team's integrity to manage that by themselves rather than abdicating that responsibility to a separate department.

    The role of a quality department as independent arbiter, a kind of internal auditor, remains an interesting point: can we be sure that Manufacturing, for example, will always be open and honest enough about things like tool release decisions and scrap rates when deliveries or performance indicators are on the line? Well, if we can't be sure then it would surely be beneficial to work on the culture of the company rather than to add layers of oversight. Again, it's a question of opportunity cost - would it be more effective for a company to pay people to police their own people, or to research and to produce things more effectively?

    If we think (or click) back to my post on how Google tests software, we see that the key philosophy there is: the design of the product should include its testing and confirmation methods. So it should be the product and process design engineers designing this together, which makes sense to me.

    And where would the resources come from to ensure that product and process design could cope with that additional work? From the now redundant quality department, of course.

    That leaves the audit side of things. Of course, nobody wants to be involved in audits at any stage. But since audits are a fact of life in many industries, the product and process design teams need to be involved in designing the business systems that they use on a daily basis to store, distribute and archive work. And they should be responsible for proving both that these ways of working are sound in theory and that they are used in practice.

    So, eliminating quality departments, partially reassigning them to design and development, is the obvious answer to all of this - yet quality departments seem to grow and grow where I am. Am I missing something obvious that would really justify the existence of quality departments? Or can we continue to "Think Canny" in this funny old business of engineering?


    → 1:30 AM, Dec 10
  • 'Tis the season to be... audited

    December. Time of cheer and good will to all colleagues, rushing for presents, updating projects, Glühwein, clearing out inboxes, eating far too much chocolate, finalising reports and… 

    And getting audited. 

    Yes, we were audited this week and one of my projects was in the spotlight. It was all going swimmingly until the auditor heat-seekingly locked on to one particular thread of my project that wasn't really parcelled up and tied with string - ironically enough, the DFMEA.

    Being shown up as lax in my own project was certainly embarrassing, one of those half-expected shocks to the system; I felt a bit like a child hoping rather than expecting not to be found out about those stolen chocolates. I was hoping rather than expecting to be able to skim over the incomplete DFMEA (structure present and correct, values not), knowing that it was a weakness without really having polished it off beforehand. I was found out, and rightly so: that’s the reason audits happen.

    We were marked down for it, of course, and I’ll have to get things back into shape sharpish.

    Reading those words of mine just above (“that’s the reason audits happen”), I surprise myself with how true they ring.

    Have I finally come to accept them? And if so, how do I accept them? Gladly, or grudgingly?

    For years I’ve harboured a deep suspicion, a dislike of audits and what they stood for. For me, they stood for engineering by checklist, for doing rather than thinking, for rewarding completeness rather than innovation and - for the vast majority of my auditing experience - huge cleaning up operations for close to zero benefit.

    When is something that is good enough not good enough? When it’s being audited.

    I have experienced both sides of auditing; I have audited and have been audited. From being part of an auditing team, working alongside quite an enlightened auditing colleague, I understand that the mindset of an auditor should be a positive one, aiming to help the subject improve by pointing out weaknesses and working on agreements to correct them before they lead to genuine failures. This mindset should match that of the auditee. When both sides see the positives that can come out of the negative messages (just as with FMEAs), then things are heading in the right direction.

    Nevertheless, audits are a not insignificant burden on everybody involved. Couldn’t we just wish audits, along with PPAPs, away?

    Well, not easily: auditing is a multi-billion dollar industry in its own right, valid across a whole spectrum of industries, and it’s a difficult edifice to start chipping away at. But even so: wouldn’t our engineering lives be so much more enjoyable without them?

    Initially, yes - they would be. We would be freed up again to design and develop as we know best: we know what our products are supposed to achieve and how to get them to that stage, even if not every Excel list has been filled out to the n’th degree en route. We could potentially become more like Google, where “...in the innovative and fast-paced world that [the Google developer] lives in, you get what you get.” (From How Google Tests Software)

    We would have more money and time to spend on D&D, too, not having to pay those auditing firms their crust or having to spend all of those man-hours preparing “just in case it’s audited”.

    But let’s look at it another way. Let’s say we want to start using a lower-cost supplier, more than likely in the old Eastern Bloc or somewhere (usually very large) in Asia. What are these companies like? Can we entrust our intellectual property, our quality and our good name to them? What better starting point could there be than searching for a certificate alongside customer references? (well, it’s true that there are differences in auditing rigour in China, even amongst the financial big four, as The Economist magazine writes)

    Audits cannot guarantee a good name, nor necessarily a good engineering company: there are firms with certificates on their walls that I wouldn’t wish on our fiercest competitors. In the same way that financial audits have missed gaping holes where the subjects have been playing the game better than the auditors - like Lehman Brothers, Enron and, it seems, Autonomy - quality audits can almost be guaranteed to miss something big from time to time.

    Even the auditors get themselves in a muddle - our December date with auditing destiny came about when the auditing company missed a submission deadline. This swiftly became our problem when our certificates were due to expire, and our emergency re-audit date last December became our annual date. Thanks, we appreciate it!

    So, what are audits good for, then? Cui bono? For starters, audits are a reassuringly expensive starting hurdle to business: my industry - automotive - and many others have gotten themselves into a standardised twist, whereby an ISO / TS 16949 certificate is a prerequisite for supplying to an OEM. It’s a pay-to-play move that gives potential customers some assurance that the company won’t royally mess things up when they start a supply relationship.

    Audits also place a burden of duty and therefore responsibility on companies and their employees – from management right down to lab technicians – to get things right. Not only to “get things right” but also to “design things right”. This applies both to the product itself and to the process of how you get the product into a customer’s hands. Ideally, an audit should be imperceptible other than having to make some coffee for a visitor and answering a few questions. Why should you have to prepare if you are living the systems that you have declared fit for your own purpose?

    Umm, too much other stuff to do, perhaps? Not enough time to focus? Not enough mental energy left for yet another list-trawl?

    Well, if audits and all the stuff that we have to prepare for them really are a burden, then - again, ideally - they should become the impulse for genuine improvements in the way we work, in the way we communicate and collaborate. All of that form-filling, report-writing and change log management should have a genuine purpose, even if it is occasionally completed in the grudging spirit of passing an audit. All of these items are part of the company's index of information. When we change and update those forms, we are changing history, improving it. It's about creating a legacy, hopefully one that our successors will be able to make sense of.

    The one thing that can make audits bearable is for everybody involved to treat them as a human thing - checks and balances are inevitably required whenever human endeavour is at work, so go with it. Let the auditors ask the good questions and let them discover how you work - even I, with my DFMEA fail this time around, will have shown that overall we’re working well and are going in the right direction. So, I’ll take that “nonconformity” hit and try to improve on managing my projects along with managing everything else, and let’s see if we can find some mental space to put to use on streamlining our work so we can do better next time.

    And so back to my initial question: do I accept audits gladly or grudgingly? Well, of course it’s still the latter, but at least it means that I aim to keep them as low profile as possible: for that, though, I’ll need the support of my management – and I can assure you that the audit result was a wake-up call for them, too. Perhaps better things will come of it (or perhaps more oversight and review meetings – still, they’re a way of switching the focus to projects).

    One final note on all of this: I don’t recall ever hearing anything about audits when I was studying engineering at university. That’s something that should change (perhaps it has, already), as they are a real, if occasional and generally unloved, part of this engineering life. If the next generation of engineers know what’s in store for them, they’ll know to focus on how they work as well as what they actually work on.

    → 6:19 PM, Dec 13
  • Ishikawa's stinking fish

    Quality issues cannot be counted amongst my favourite activities. They can normally be categorised as "urgent-uninteresting", which is just about the best demotivator I can imagine. They're negative, cause huge floods of emails, assumptions, obfuscation and general panic. Some people thrive on this sort of situation. I, generally, don't, as was again proven by a quality concern with some Chinese colleagues.

    I get involved simply because our Tech Centre has the best kit, so we can test what others can't. It's annoying, because development people would rather look forwards than downwards at self-shot feet.

    Nevertheless, some quality issues are useful ("never let a crisis go to waste" and all that). Some are excellent impromptu team-building exercises and others simply turn up some interesting artefacts, like this beauty below. It stopped me in my tracks - never have I seen an Ishikawa diagram illustrated so literally as by my Chinese colleagues...!

    [Image: my Chinese colleagues' Ishikawa fishbone diagram, drawn as an actual fish]

    For those not yet in the know, the Ishikawa, or fishbone, diagram is a way of formalising the investigation into the potential causes of a particular issue. It's a methodology that forces you to look at each of the 6 M's (others count 7 or even 8) in order to gain the full picture of what might have gone wrong to cause the issue (sorry, problem) that we're working on (the Environment one is clearly an awkward 'M-ification' for the purposes of alliteration):

    • Machine (technology)
    • Method (process)
    • Material (includes raw material, consumables and information)
    • Man power (physical work) / Mind power (brain work): kaizens, suggestions
    • Measurement (inspection)
    • Milieu / Mother Nature (environment)

    I can't tell you precisely what the 5 Chinese characters represent in this one. Whatever the causes of this particular quality issue, the discovery of this putrid gem of a rotten, stinking fish amongst the rotten, stinking debris of a quality concern almost made up for it...

    → 1:48 AM, Oct 6