  • 'Tis the season to be... audited

    originally posted on one of my several now defunct blogs, called On Engineering, on 13th December 2012

    December. Time of cheer and good will to all colleagues, rushing for presents, updating projects, Glühwein, clearing out inboxes, eating far too much chocolate, finalising reports and… 

    And getting audited. 

    Yes, we were audited this week and one of my projects was in the spotlight. It was all going swimmingly until the auditor heat-seekingly locked on to one particular thread of my project that wasn’t really parcelled up and tied with string - ironically enough, the DFMEA.

    Being shown up as lax in my own project was certainly embarrassing, one of those half-expected shocks to the system; I felt a bit like a child hoping rather than expecting not to be found out about those stolen chocolates. I was hoping rather than expecting to be able to skim over the incomplete DFMEA (structure present and correct, values not), knowing that it was a weakness without really having polished it off beforehand. I was found out, and rightly so: that’s the reason audits happen.

    We were marked down for it, of course, and I’ll have to get things back into shape sharpish.

    Reading those words of mine just above (“that’s the reason audits happen”), I surprise myself with how true they ring.

    Have I finally come to accept them? And if so, how do I accept them? Gladly, or grudgingly?

    [Audits stand] for engineering by checklist, for doing rather than thinking, for rewarding completeness rather than innovation

    For years I’ve harboured a deep suspicion, a dislike of audits and what they stood for. For me, they stood for engineering by checklist, for doing rather than thinking, for rewarding completeness rather than innovation and - for the vast majority of my auditing experience - huge cleaning up operations for close to zero benefit.

    When is something that is good enough not good enough? When it’s being audited.

    I have experienced both sides of auditing; I have audited and have been audited. From being part of an auditing team, working alongside quite an enlightened auditing colleague, I understand that the mindset of an auditor should be a positive one, aiming to help the subject improve by pointing out the weaknesses and working on agreements to correct those weaknesses before they lead to genuine failures. This mindset should match that of the auditee. When both sides see the positives that can come out of the negative messages (much as with FMEAs), then things are heading in the right direction.

    Nevertheless, audits are a not insignificant burden on everybody involved. Couldn’t we just wish audits, along with PPAPs, away?

    Well, not easily: auditing is a multi-billion dollar industry in its own right, valid across a whole spectrum of industries, and it’s a difficult edifice to start chipping away at. But even so: wouldn’t our engineering lives be so much more enjoyable without them?

    Initially, yes - they would be. We would be freed up again to design and develop as we know best: we know what our products are supposed to achieve and how to get them to that stage, even if not every Excel list has been filled out to the n’th degree en route. We could potentially become more like Google, where “…in the innovative and fast-paced world that [the Google developer] lives in, you get what you get.” (From How Google Tests Software)

    We would have more money and time to spend on D&D, too, not having to pay those auditing firms their crust or having to spend all of those man-hours preparing “just in case it’s audited”.

    But let’s look at it another way. Let’s say we want to start using a lower-cost supplier, more than likely in the old Eastern Bloc or somewhere (usually very large) in Asia. What are these companies like? Can we entrust our intellectual property, our quality and our good name to them? What better starting point could there be than searching for a certificate alongside customer references? (well, it’s true that there are differences in auditing rigour in China, even amongst the financial big four, as The Economist magazine writes)

    Audits cannot guarantee a good name, nor necessarily a good engineering company: there are firms with certificates on their walls that I wouldn’t wish on our fiercest competitors. In the same way that financial audits have missed gaping holes where the subjects have been playing the game better than the auditors - like Lehman Brothers, Enron and, it seems, Autonomy - quality audits can almost be guaranteed to miss something big from time to time.

    Even the auditors get themselves into a muddle - our December date with auditing destiny came about when the auditing company missed a submission deadline. This swiftly became our problem when our certificates were due to expire and our emergency re-audit date last December became our annual date. Thanks, we appreciate it!

    So, what are audits good for, then? Cui bono? For starters, audits are a reassuringly expensive starting hurdle to business: my industry - automotive - and many others have gotten themselves into a standardised twist, whereby an ISO / TS 16949 certificate is a prerequisite for supplying to an OEM. It’s a pay-to-play move giving potential customers a guide that the company won’t royally mess things up when they start a supply relationship.

    Audits also place a burden of duty and therefore responsibility on companies and their employees – from management right down to lab technicians – to get things right. Not only to “get things right” but also to “design things right”. This applies both to the product itself and to the process of how you get the product into a customer’s hands. Ideally, an audit should be imperceptible other than having to make some coffee for a visitor and answering a few questions. Why should you have to prepare if you are living the systems that you have declared fit for your own purpose?

    Umm, too much other stuff to do, perhaps? Not enough time to focus? Not enough mental energy left for yet another list-trawl?

    Well, if audits and all the stuff that we have to prepare for them really are a burden, then - again, ideally - they should become the impulse for genuine improvements in the way we work, in the way we communicate and collaborate. All of that form-filling, report-writing and change log management should have a genuine purpose, even if it is occasionally completed in the grudging spirit of passing an audit. All of these items are part of the company’s index of information. When we change and update those forms, we are changing history, improving it. It’s about creating a legacy, hopefully one that will help our successors make sense of the past.

    The one thing that can make audits bearable is for everybody involved to treat them as a human thing - checks and balances are inevitably required whenever human endeavour is at work, so go with it. Let the auditors ask the good questions and let them discover how you work - even I, with my DFMEA fail this time around, will have shown that overall we’re working well and are going in the right direction. So, I’ll take that “nonconformity” hit and try to improve on managing my projects along with managing everything else, and let’s see if we can find some mental space to put to use on streamlining our work so we can do better next time.

    And so back to my initial question: do I accept audits gladly or grudgingly? Well, of course it’s still the latter, but at least it means that I aim to keep them as low profile as possible: for that, though, I’ll need the support of my management – and I can assure you that the audit result was a wake-up call for them, too. Perhaps better things will come of it (or perhaps more oversight and review meetings – still, they’re a way of switching the focus to projects).

    One final note on all of this: I don’t recall ever hearing anything about audits when I was studying engineering at university. That’s something that should change (perhaps it has, already), as they are a real, if occasional and generally unloved, part of this engineering life. If the next generation of engineers know what’s in store for them, they’ll know to focus on how they work as well as what they actually work on.

    → 7:37 PM, Dec 13
  • On cracking up

    originally posted on one of my several now defunct blogs, called On Engineering, on 25th September 2012

    Photo: Corrosion Doctors / Metallurgical Technologies

    A little snippet of what I learned at university, long residing deprecated in my skull, all of a sudden became relevant and real during one of the most boring tests that we could think of for a component. A part cracked during corrosion testing. Then another did, and another - and we had an issue, as they say in the modern parlance.

    It was one of those things that, as soon as they occur, make you think “of course! Why didn’t we think of that before we started testing?” To which the answer is of course: because we didn’t know that we needed to think of it.

    It was a lovely, clear case of stress corrosion cracking.

    We were looking at making the switch from steel to aluminium for some joint components. We already knew about some of the ways that Al differs from steel (no definable infinite fatigue life, galvanic potential in corrosive environments, for example). The test plan that we came up with was a fairly standard one that would more or less tick boxes so that we could introduce the product with a customer.

    And then the stress corrosion cracking appeared. Fortunately, we had for various reasons chosen to test two different Al alloys - and sure enough, only one of them was affected by SCC. A quick Google search confirmed our findings: Al 7075 T6 is highly susceptible to SCC, whilst the 6xxx alloy that we also tried is not. You can read some of the literature we found here and here.

    It turns out that the alloying elements (Zn, Mg and Cu) that make up 7075 T6 and its kin precipitate out to the grain boundaries during the T6 grade heat treatment. This inter-metallic phase forms an anode to the grains’ cathode, promoting corrosion along the grain boundaries (from a good discussion on Engineering Tips, here).

    They also create inherent weaknesses in the overall structure. With a component under permanent stress (in our example we’re looking at around 10 kN clamping force), corrosion running between the grains uncovers a path for a crack to propagate.
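
    To put that clamping force into perspective, here’s a back-of-envelope sketch in Python; the load-bearing cross-section and the yield strength are illustrative assumptions, not values from our actual component or test report.

        # Back-of-envelope check: how much of the alloy's strength does a
        # sustained 10 kN clamp load use up? The cross-section and yield value
        # are illustrative assumptions, not figures from our actual component.
        clamp_force_N = 10_000        # ~10 kN clamping force, as in the post
        cross_section_mm2 = 60.0      # assumed load-bearing area (illustrative)
        yield_7075_t6_MPa = 500.0     # typical quoted yield strength for Al 7075-T6

        stress_MPa = clamp_force_N / cross_section_mm2   # N/mm^2 is MPa
        utilisation = stress_MPa / yield_7075_t6_MPa
        print(f"Sustained stress: {stress_MPa:.0f} MPa ({utilisation:.0%} of nominal yield)")
        # -> roughly 167 MPa, about a third of yield: nowhere near static failure,
        #    but a permanent tensile stress is exactly what SCC needs.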

    Funnily enough, the crack wasn’t catastrophic, at least not in our nice, stable laboratory conditions: the joint was still tight. However, adding typical vehicle loads and system pressures to the joint would almost certainly lead to a reduced component life.

    So - it’s clear: don’t use Al 7075 T6 in applications that can be exposed to corrosive environments whilst under load. The biggest question now is how to ensure that future Canny Engineers don’t fall into the same trap that this one did and end up feeling less canny than ever.

    One part of the answer lies in the most obvious place of all, the drawing. But only part of the answer… We will of course remove Al 7075 T6 from the drawing and keep our Al 6xxx material on there. As this is a real product change, we will have to update the revision level on the print and obtain approval for that change. This means that we will enter our change request procedure, filling out the associated form. There, we will write a few lines about SCC, make reference to the test report - and that’s that. The change request document will be referenced on the drawing (remember to obtain the CR number before changing the drawing, then!), so the link is made. Anyone interested in the history of this part can, with a little digging, find the reason why we eliminated Al 7075 T6 from our prints.
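
    Boiled down, that paper trail is just a record linking the drawing revision to the change request and the evidence behind it. A hypothetical sketch, with invented numbers and field names rather than our real system:

        # What the paper trail boils down to, as a data structure. The numbers
        # and field names are invented placeholders, not our real CR system.
        change_record = {
            "cr_number":    "CR-2012-0417",   # obtained before touching the drawing
            "drawing":      "12345-678",
            "old_revision": "B",
            "new_revision": "C",
            "change":   "Remove Al 7075 T6 as a permitted material; keep Al 6xxx",
            "reason":   "Stress corrosion cracking under sustained clamp load",
            "evidence": "corrosion test report",
        }
        # The drawing at its new revision carries the CR number, so anyone digging
        # through the history can trace the "why" back to the test report.
        print(f"{change_record['drawing']} rev {change_record['new_revision']} "
              f"<- {change_record['cr_number']}: {change_record['reason']}")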

    Great - but that’s hardly a robust way of ensuring that future engineers will know to avoid it. They would have to chance upon the one print that was involved in our original “discovery” of SCC, when all other variants of this component would no longer have any reference to this change, because they would of course have been designed correctly from the outset… Until our future canny engineer says “hold on, why not use 7075 T6? It’s stronger, so we can use less of it… etc.” How do we, as the ghost of his past, tell him that he’s in danger of no longer being canny?

    This is where company Wikis, bodies of knowledge, collections of basic data, whatever you call them, need to be working well. Just relying on Google doesn’t work: I just put “Al 7075 T6” into the search box and came up with 935,000 results, and in the first several pages it all looks great - the perfect material for our application; which, of course, it was for us, until it wasn’t.

    What I have started to do is to set up a technical Wiki for precisely such matters. It’s sparsely populated, and there are currently no other users - and the management don’t seem to have understood it - but it’s worth persevering with into 2013 so that my colleagues in 2031 don’t have to go through the stress of corrosion cracking.

    Assuming that the system that runs my wiki is still extant…

    → 10:42 PM, Dec 10
  • Book Review: How Google Tests Software

    originally posted on one of my several now defunct blogs, called On Engineering, on 14th November 2012

    Have you not noticed a book recently? Forgotten that you were reading whilst you were reading? That’s the author’s ideal: their books should melt away whilst you are reading them, so that the content transcends the medium and becomes the event.

    Can this happen with a technical book? Honestly, I don’t believe it can; technical books are so full of references, tables and figures, footnotes and diagrams that you cannot escape their structure, their architecture for long. I could briefly get lost in an alloy phase diagram in “Engineering Materials”, but I couldn’t read the book page after page, for hours on end, like I could a Julian Barnes or an Iain M. Banks.

    An engineer’s job does (or at least should) include reading up on things, whether that be a new book or browsing the web for information. This being an engineering blog, I thought the occasional review of interesting resources that I have encountered might be something worth writing about. This is the first in this unforeseeably long or short series of reviews.

    The book that kickstarted this whole thought process was one I came across as background reading for my post on whether Software Engineering is Engineering: it was the ebook How Google Tests Software.

    How Google Tests Software (HGTS) was written (developed and compiled, perhaps?) by three gurus in the art of software testing: James Whittaker, Jason Arbon and Jeff Carollo. In style, it is what could be expected of Google from an outsider’s viewpoint - quite chatty, breezy, somewhat at odds with the incredibly technical and mathematical work that they do. It is also replete with excellent word selection, suggesting that whilst coding is at the heart of their work, this trio is also at home communicating with people. Indeed, being bright and capable of communication is a key aspect of their respective rises to the upper echelons of Google (and, in Whittaker’s case, Microsoft) management. James Whittaker certainly has literary form, having written “How to break software” and “Exploratory Software Testing” prior to HGTS.

    In truth, and from my perspective thankfully, HGTS is only semi-technical. There is not much in the way of code snippets or significant jargon; it’s more a case of using dialect (“dog-fooding” for internal pre-Alpha software testing, for example). The book reminded me a little of the classic aerospace book “Flight without formulae” in that there is a minimum of code and a maximum of description. This suited me down to the ground. Someone in the software development world may be disappointed at not having chunks of test code to try to understand or to try out, but this book describes in a lively way the key principles of how to manage testing, how to manage testers and how testing has to become integrated both into the product and into the company itself. This makes the book worthwhile reading for software developers, I’m sure - but also for us.

    The essential message of the book is entirely relevant even at my mechanical end of the engineering spectrum: it is that software testing and quality must go hand in hand with development.

    In the book, we learn how Google went so far as to kill off the group called “Testing Services” and to resurrect it as “Engineering Productivity.” More than merely a rebranding, the switch ensured that the software developers were testing their new code all the way through the development process: the Productivity Team gave them the tools to do so.

    Software testing consists of several levels, from quality checks on portions of code, through to logic and functionality tests on components and upwards to full interdependent systems and finally user testing.
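
    For mechanical engineers like me who have never actually seen one, here is a minimal sketch of what the lowest of those levels - a unit test on one small piece of logic - might look like; the function and values are invented for illustration and are not code from the book or from Google.

        # A minimal, hypothetical example of the lowest testing level: a unit
        # test checking one small piece of logic in isolation. Everything here
        # is invented for illustration, not taken from the book or from Google.
        import unittest

        def clamp_force_ok(force_kN, lower=8.0, upper=12.0):
            """Return True if a measured clamp force lies within tolerance."""
            return lower <= force_kN <= upper

        class ClampForceTest(unittest.TestCase):
            def test_in_range(self):
                self.assertTrue(clamp_force_ok(10.0))

            def test_out_of_range(self):
                self.assertFalse(clamp_force_ok(5.0))
                self.assertFalse(clamp_force_ok(15.0))

        if __name__ == "__main__":
            unittest.main()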

    Equally, there are several levels of test engineer involved: there is the SWE (Software Engineer), who principally develops code, but also tests the same code for “low-level” bugs. There is the SET, the Software Engineer in Test, who aids the SWEs in writing test code and the frameworks for such testing, and finally there is the TE, the Test Engineer, who is involved in the user-side testing of an app or a site.

    The test team is kept small by design, making it a limited resource that thereby keeps a large enough balance of responsibility on the side of the SWEs to keep things as bug-free and as smooth as possible. The idea is that if Testing were to become a huge department, like in the bad old days, software quality would become worse, not better, since SWEs would once more feel released from the constraint of having to consider testing and quality as being an integral part of what they create. Google (as would any other company) would slow down to become a bureaucratic monster, no longer nimble, no longer smart.

    The sheer complexity of what the testers do is incredible and totally beyond my ken. Tests that range from small to enormous, automated bots that trawl websites for bugs, whole tracking systems for bugs: these are all impossible creatures for me.

    Intellectually, though, they become analogies and hints for improving our own ways of working. Let’s take the structure: We have technical, production and quality departments: why not eliminate the quality department as we know it and create a Productivity Improvement Team? Indeed, why is quality treated separately? If we focus on productivity, we automatically have to eliminate quality issues. Google tracks bugs with Buganizer? Well, we could move on from quality catalogues (aka rogues’ galleries) to active tracking and destroying of our own quality stumbles - for everybody. Google trawls websites for usability issues? We could do much more collecting of warranty and benchmark data for our parts and those of our competitors. Google raises bug alerts on competitors’ sites? Hmm, well, perhaps that’s an analogy too far, but the notion of making our industry a better place is a noble one.

    Google uses what they term the “ACC” Analysis methodology, where teams think through Attributes, Components and Capabilities to determine an initial test plan for that product for each instance where a component is broken or a capability not met. That is, they think through what would happen and how a user would be affected if a particular component were suboptimal or broken, and assess how frequent that type of failure would be. It all sounds very similar to the FMEA methodology in our world.
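
    As a thought experiment, here is how that kind of ranking might look if scribbled down in code rather than in a spreadsheet; the components, capabilities and scores are invented, and this is my own FMEA-flavoured reading of ACC rather than Google’s actual tooling.

        # My own FMEA-flavoured reading of the ACC idea. The components,
        # capabilities and scores below are invented examples.
        risks = [
            # (component, capability, frequency 1-10, impact 1-10)
            ("search box",  "returns results for a valid query", 2, 9),
            ("login",       "rejects a wrong password",          1, 10),
            ("preferences", "remembers the language setting",    5, 3),
        ]

        # Rank by a crude risk number, much like an FMEA's RPN.
        for component, capability, freq, impact in sorted(
                risks, key=lambda r: r[2] * r[3], reverse=True):
            print(f"{freq * impact:3d}  {component}: {capability}")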

    Tellingly, though, Google doesn’t seem to let itself get bogged down in documentation or specifications. “…I suppose there is some fairytale world where every line of code is preceded by a test, which is preceded by a specification. Maybe that world exists. I don’t know. But in the innovative and fast-paced world that I live in, you get what you get. Spec? Great! Thank you very much, I will put it to good use. But… demanding a spec won’t get you one… Nothing a spec writer … can do will help us find a problem that a real user will encounter.” 

    I would be interested to know if Google needs to pass audits in the same way we do.

    Google can be very clear on how it should manage clever people: “…I am a big believer in keeping things focused. Whenever I see a team trying to do too much, say working on five things and doing only 80% of all five, I have them step back and prioritise. Drop the number of things that you are doing to two or three and nail those 100%.”

    So - the way Google has set up its development teams with quality at their heart, then set up productivity teams that provide the tools for quality to succeed sounds like a benchmark for us to meet.

    These and many more are the lessons to be drawn from How Google Tests Software. I would certainly recommend delving into the book for even more on how to recruit clever people, how to work with barefoot managers, and how to ensure that jobs and roles are not entities in themselves, but part of a community in their own right.

    Could Google learn something from us? Well, if Google really wanted to know how to bog itself down in administration, they could always learn from us and introduce PPAPs to the software world. That would help, I’m sure.

    And: did you not notice your browser there for a few minutes? If so, then it’s a sign that this post was in some way or other interesting; if not, then you probably disappeared down a few tangents via those links - the very nature of blogs and the internet (or your browser crashed…)

    → 6:42 PM, Nov 14
  • The numerical pitfalls of engineering in Germany

    originally posted on one of my several now defunct blogs, called On Engineering, on 25th September 2012

    The relationship between engineers and numbers is often an uneasy one. Engineers are by and large mathematically literate after all those years at university, but we don’t necessarily feel at home in the world of maths. It’s a subject that we feel is to be dropped at the earliest opportunity.

    Once we enter employment, we don’t need maths anyway. We learn the formulae and relationships that are pertinent to the subject matter at hand - and forget the rest. If we do happen to need something from “the rest”, we generally know where to unearth it and, after some thought, can apply it.

    My own mathematical world is rather limited. I need to interpret test data, certainly, but have found that it’s the qualitative information that I extract that is useful rather than any best-fit curves or dynamic equations of state. I do sometimes need to calculate friction coefficients, but since the formulae are simple enough to encapsulate in a spreadsheet, I don’t actually need to know what precisely those formulae are (but I can find them if required).
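
    As an example of the kind of thing such a spreadsheet cell might hide - and emphatically not our actual calculation - the simplified torque-tension relation T = K·F·d for a bolted joint can be rearranged to estimate an overall friction (“nut”) factor from a measured torque and clamp force:

        # One example of the sort of formula such a spreadsheet hides: the
        # simplified torque-tension relation T = K * F * d, rearranged to give
        # the overall "nut factor" K, which lumps thread and under-head friction
        # together. The numbers are illustrative, not from a real test.
        def nut_factor(torque_Nm, clamp_force_N, nominal_diameter_m):
            return torque_Nm / (clamp_force_N * nominal_diameter_m)

        K = nut_factor(torque_Nm=20.0, clamp_force_N=10_000.0,
                       nominal_diameter_m=0.010)   # M10 bolt, 20 Nm, 10 kN
        print(f"Overall nut factor K ≈ {K:.2f}")   # ≈ 0.20, a typical ballpark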

    If mathematics is one aspect of this relationship between engineers and numbers, numeracy is the other. Whatever results we get out of testing, or whatever design information we wish to convey, I need to talk numbers with colleagues, suppliers or customers. Given that I’m living and working in Germany, these discussions often take place in German. Now, whilst I’m pretty good at the language from a linguistic standpoint, I have a real problem with German numbers.

    I’d like to point out at this juncture that I was never bad at simple additions and subtractions in English. But the quirks of the way the German language treats numbers make me stumble when naming or hearing numbers in isolation and often I’ll simply give up if I need to engage in a little mental arithmetic.

    The problem is that German numbers are - partially - enunciated backwards.

    If I want to say the number 65 in English, I have a nice mental image: sixty-five - 65. However, German tells us the smaller number first: fünf-und-sechzig. This means that my innate numerical image is messed up and I need to hold figures in a buffer before I can complete my own natural image: (5)…60 -> 65.

    My internal workings are a very fast version of this:

    “OK, he’s just said the number five, and there’s an “and” being said, so there will be a number coming before it… So, what is it, then?… Wait for it… Ah, OK, so it was a sixty. What was the first number again? A five. So, that sixty was the tens digit and five was the unit digit, so he meant to say sixty five.”

    That usually works, as I say, when numbers are mentioned in isolation. But having to perform arithmetic with them seems to be a mental effort too far.

    “So, he wants me to take five and sixty, then subtract eight and twenty. So that means… scramble… scramble… seven and thirty.” My German colleagues, who have grown up with this nonsense, can cope with it much better than I have ever been able to, and are therefore usually much faster at computing the answer and beat me to it; after a short while, I gave up even trying.
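
    The buffering I described above translates quite naturally into code, as it happens. A toy sketch with a deliberately tiny vocabulary - an illustration of the mental gymnastics, not a real German number parser:

        # A toy version of the mental buffering described above: hold the units
        # digit, wait for the tens, then assemble the number. The vocabulary is
        # deliberately tiny; this is illustration, not a real German parser.
        UNITS = {"ein": 1, "zwei": 2, "drei": 3, "vier": 4, "fünf": 5,
                 "sechs": 6, "sieben": 7, "acht": 8, "neun": 9}
        TENS = {"zwanzig": 20, "dreißig": 30, "vierzig": 40, "fünfzig": 50,
                "sechzig": 60, "siebzig": 70, "achtzig": 80, "neunzig": 90}

        def parse_german(word):
            """e.g. 'fünfundsechzig' -> 65, 'achtundzwanzig' -> 28"""
            if "und" in word:
                unit_word, tens_word = word.split("und", 1)
                return TENS[tens_word] + UNITS[unit_word]  # buffer the unit, wait for the tens
            return TENS.get(word) or UNITS[word]

        print(parse_german("fünfundsechzig") - parse_german("achtundzwanzig"))  # 65 - 28 = 37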

    The mental effort is compounded by larger numbers where the hundreds are spoken first, then come the units, followed by the tens. So these end up as:

    “Six hundred two and fifty.”

    My internal numerical imaging is so strong that as well as my problems in parsing and understanding the numbers I hear, I have problems in saying them, too. This has caused at least one near miss in my time as an engineer in Germany, where I told a prototype builder to make something one hundred and fifty eight millimetres long, rather than one hundred five and eighty millimetres. Thank goodness for sketches and prints, is all I can say to that.

    I know German primary school teachers who have confirmed that the German numbering system really does create difficulties for children at school - but given that quite a lot of German engineering seems to work reasonably well, it does not seem to be a handicap for life.

    Other languages have their idiosyncrasies, notably (in my experience, anyway), French with its sixty-fourteen (for 74) and four-score-and-three (for 83): but since these quirks are still sequential, I can cope (but still hope that the Belgian “septante, octante” and “nonante” take over the francophone world). Maltese numbers are so complicated that they have been almost completely done away with and replaced by the English; but there is even a slight hiccup in logic in English: “thirteen”, “fourteen” and so on are a version of the German, with their singles being named first. But it’s such a short sequence of exceptions to the rule that I would treat them along with “eleven” and “twelve” as practically being units in their own right.

    If you’re a German reading this, I’d be fascinated to know what your mental processes are for this, and whether you feel that you’re at some kind of mental processing disadvantage because of it; or even if you feel that it’s a kind of mental brain training that gives you that additional edge.

    If you’re an Asian right-to-left reader reading this, I wonder if your mental image of numbers is different to mine, or whether the western numeral notation system has become so dominant that you think sixty five as a six followed by a five.

    Now of course, it can sometimes simply not matter which way the numbers crop up…

    554 + 445 = 999

    545 + 454 = 999

    … But by the time I’ve worked that out, it’s too late: I’m frazzled and am looking for the nearest pen - or a cup of coffee.

    → 6:45 PM, Sep 25
  • The value of an FMEA seminar lies not (only) in the presentations

    originally posted on one of my several now defunct blogs, called On Engineering on 25th September 2012

    Engineers are, by all accounts, a fairly unsociable lot. That’s of course not to say that we’re particularly obnoxious in any way - it’s simply that engineers have not, over the decades, dispelled the notion that we are difficult colleagues. We’re not great at meeting people, we dislike meetings, can’t spell and can’t express ourselves with any degree of fluency. Yes, we’re generally capable of having families and we do grudgingly recognise the need for working with others but we communicate at best wordlessly, through diagrams and drawings, prototypes and graphs, with equations if we’re showing off. Thrown into the deep end of human interaction - with real live people - we flounder a little, then try to escape into our own little air bubbles, wide, panicked eyes magnified by refraction.

    I threw myself into the deep end last week by attending a two-day seminar on FMEAs run by the software company APIS, which makes the rather special IQ-FMEA product family. I survived the experience. And it seems that most of the others who attended did, too. Whilst I can’t completely repudiate the notion that engineers can be a little insular or initially difficult to connect with - well, that’s human nature at work rather than the type of human at the conference.

    The seminar was held at the Maritim hotel in the lovely town of Würzburg in the Franconia region of Bavaria (there lurks a lot of history behind that statement, meaning that the Franconians don’t appreciate being called Bavarian). The APIS team organised the conference very well indeed, including some opportunities to get to know the town, its history and its culture, in particular its wine.

    So, you’re thinking: wine, history, culture, coffee. But how was it even remotely possible to fill two days with lectures and presentations on this one, dry old topic, whose output is typically a spreadsheet used only to pass audits? How can over 200 people gather to discuss the FMEA?

    It’s true: most FMEAs are merely vestigial remnants of a potentially great tool. Most companies get away with paying the merest lip-service to them (they have to, in order to pass audits), as they can often rely on long experience in designing and producing their product - or they are sufficiently fleet of foot (and well funded) that mistakes can be made and quickly rectified. Yet the FMEA, like many tools, is there for a purpose and, used properly, can lead to surprising revelations and to a fundamental understanding, including a detailed library of lessons learned on your own product. The FMEA is worth exploring and talking about.

    So, how were the two days filled, other than with coffee breaks and lunch? With presentations and - most importantly of all, during those self-same coffee and lunchbreaks - talk.

    As a quick background of what was presented, here’s a little taster:

    “FMEA-Lite”, by a representative of Autoliv, the safety equipment manufacturer, who admittedly made the FMEA-Lite look like a thin filling of a very chunky sandwich: the FMEA portion may have been light, but it was surrounded by fat block diagrams and manually created Excel robustness management matrices that looked far more complex than their potential benefit could ever merit.

    A pair from Continental (one of whom was an ex-developer from APIS) who shared their experience and advised on the Simultaneous Engineering of FMEAs - disparate groups of people working on different portions of a larger FMEA at one time.

    There was a critique of the wording within the latest VDA guidelines from a chap from Festo, and an FMEA consultant / trainer introduced his thoughts on how FMEAs can effectively be implemented in today’s ever more complex mechatronic systems.

    There was even a light-hearted and entertaining introduction to the workings of the brains of FMEA moderators (those people who run the software, run the meetings and therefore need to be attuned to the personal and emotional signals and needs of the participants, no matter how grouchy, quiet or aggressive they may be - i.e., often engineers dropped into a very non-engineering style role).

    This being a company-run seminar, we also received some insights into the future of the IQ-FMEA software itself from APIS (“we like to stay around 5 years ahead of the competition”).

    So, how did I get on in the world of conferencing? Not great, to be honest. I came away with a mere two new business cards, though also with a few undocumented discussions with representatives from BASF automotive coatings, Siemens automation and Kärcher (they of the pressure washers and much more). But I didn’t break into any cliques. Reading down the attendees list, I would say that well over half of the representatives were there amongst company groups: Continental, Daimler, Magna, Bosch and so on. I don’t recall meeting a single first-timer like me, either, so cliques were both inevitable and slightly difficult to break into - for an engineer like me: the value of conference networking stems from the second time onwards, when people vaguely recall my face, remember that I was kind of OK to talk to.

    I’ll dedicate a future post or two to FMEAs themselves. For now, though, it’s good to have made that first step into the conferencing world…

    → 6:23 PM, Sep 25
  • Engineers: are we but droplets in the cloud?

    originally posted on one of my several now defunct blogs, called On Engineering on 17th August 2012

    When trawling the net and various books online for background to my post wondering whether Software Engineering is Engineering, I came across a book on how to teach software engineering (its name, like this clause, would only interrupt the flow of this post). I was only afforded the preview on Google Play (OK, I didn’t buy the book), but one phrase I came across intrigued me, since it gets to the core of my thoughts on this blog. The phrase is this:

    “Software engineering - the “engineering” of software - is part process, part technology, part resource management, and, debatably, until recently, part luck …. Learning to be a software engineer - learning about software - learning about engineering (the former, a nebulous topic, the latter an equally nebulous attitude of professionalism) form the target that educators are aiming to hit…”

    Or, paraphrased: “Engineering is a nebulous attitude of professionalism.”

    I think that’s a fabulous non-description, but it raises some interesting considerations, as that word nebulous - cloudy, vague, formless - bears so much information and insinuation. The word implies that engineering can be observed and classified but only billows around a probing grasp. It implies that the macro and the micro definitions of engineering are completely different: in the same way that clouds are made up of a myriad of droplets and the nuclei of those droplets, engineering is made up of myriad interconnections and dependencies. It’s what makes engineering so potentially fascinating and so potentially frustrating.

    Instead of trying to capture all of those influences in words, I decided to resort to the prototypical engineering fallback tool - a sketch. It’s more of a brainstorm than anything defined, though: it’s nebulous, made up of lots of droplets and is liable to change at any moment. Here’s what it looks like today:

    I’ll keep refining it, but you get the picture. The form of your own particular cloud depends entirely on your engineering environment and whichever way the winds of development and commerce blow. Is engineering unique in this respect? Undoubtedly not - there are many more nebulous attitudes of professionalism - but it’s a good thought-raiser.

    And there’s one thing that the nebulous analogy misses entirely: clouds don’t produce paperwork.

    You may use the picture for your own devices under a kind of CC license: Common Courtesy. A simple link and acknowledgement would be appreciated!

    → 9:07 PM, Aug 17
  • Is software engineering engineering?

    originally posted on one of my several now defunct blogs, called On Engineering on 7th August 2012

    Software seems to be getting all the glory these days, with the notable exception of the Curiosity landing - but any system that uses a rocket crane to gently place a one tonne nuclear-powered rover into a crater on Mars is astounding. Aside from the MSL, though, it’s all Facebook this and Google that - even Microsoft, the uncoolest of them all, collects kilometres of headlines. I get the impression that engineers like me, working on “things” like metals, coatings, fluids, remain unlauded. In modern parlance, I work on “dumb*” things. They are non-trivial things, of course, otherwise I wouldn’t be engineering them; the products I work on also have many millions of users and the company I work for even makes a profit - but it’s not software.

    The world of software deserves its acclaim. The engineering that I do could hardly be imagined without IT. Spreadsheets and presentation tools, web browsers, emails, data analysis software all across the spectrum to text messages are an integral part of my working life. In one sense, then, software is “merely” a tool that enables me to add value to things. Equally, I am aware of the tools that I use when I’m not in the office: music sequencers, smartphones, GPS systems - and blogging apps, of course.

    All of this software resides in hardware, but in many cases the physical is largely transparent. Software defines the utility of the hardware.

    So software is one of modern life’s key enablers. It can be stunningly complex and is in a perpetual state of development (unless the company goes bust or is bought out for its team). The question is, though: is software engineered, or does it somehow “happen”?

    Put another way: if I were to dust off my Basic or my Turbo Pascal and hop over to a software company, would I recognise what I would do there as engineering?

    The title Software Engineer certainly exists. It can be found in the job pages of Facebook and Yammer. There are university courses offered in Software Engineering the world over. There is a Software Engineering Institute at Carnegie Mellon, and the Fraunhofer Institute has its own Experimental Software Engineering group.

    Yet despite all of this apparent validation, the title still seems diffuse and interchangeable. Some companies avoid the title Engineer altogether, using by preference the word “Developer”, which seems currently to have the highest cachet, whether the practitioner is Junior, Senior, Expert or Chief Expert. A developer friend tells me that where he works, the title “engineer” is not used at all, as it smacks of robust inflexibility grounded by paperwork, whereas developers are by nature free to react quickly and autonomously to the ever-changing requirements and bug discoveries that define software.

    I see what he means (and take slight, but acknowledging, umbrage at that assessment of engineering). But others use the title “Engineer” as a standard moniker - including Google, of all places. So how do they use it?

    James Whittaker (now back at Microsoft) in his highly engaging book “How Google Tests Software” describes many of the development tools used by Google during software development. They seem to be parallels of my own tools. He talks about specifications, about a kind of FMEA (risk derived from what Google calls ACCs - Attribute / Component / Capability factors), about test and validation, about breaking things to find their weak points and subsequent focus on fixing those areas.

    A Google Software Engineer (also described in the book as a “feature developer”) is responsible for delivering tested, bug-free code to a particular project. Software Engineers in Test are geared up to write code and test-frameworks to find bugs in the product, and Test Engineers work specifically on ways to break the total product in clever ways.

    It all sounds quite similar to my world. Instead of code, I write drawings and specifications. I organise testing and validation, I have to deal with change. Our manufacturing engineers ensure that product can be made and our quality engineers ensure that product is measured and released for sale. However, whilst specifications, documentation and requirements are all present and largely correct at Google, they come across as being secondary to the ultimate goal of shipping bug-free code.

    This is of course totally true. They are secondary (I shudder when I hear Quality Managers refer to what they do as “value add.” It’s cost-added for value saved.) However, there is a different emphasis on rigour between software and hardware, and it may be reflected in the respective titles: hardware engineers but software developers.

    One of the directors quoted in How Google Tests Software explicitly states “I suppose there is a fairytale world where every line of code is preceded by a test which is preceded by a specification… But in the innovative and fast-paced world that I live in, you get what you get… Demanding a spec won’t get you one… I can whine or I can add value.” Equally apposite: “Test plans are the first testing document to be created and the first one to die of neglect.”

    These statements reflect the same pressures that I experience as a mechanical engineer. We also have timing pressures to deal with, and spec writing is also a necessary evil. But the attitude seems different. I simply could not imagine such a statement coming from a GM or Daimler director, let alone from that great automotive bureaucracy, Volkswagen.

    Documents and specifications aside, subtly but tellingly, in a series of interviews with Google Test Directors at the end of How Google Tests Software, each director refers automatically to developers and only occasionally uses the word “engineer” as a secondary term.

    So perhaps engineering is a nice-to-have concept in the world of software, a little bolted on. On the other hand, we engineers may be too static and outmoded for the modern and fast-paced world of gold-medal software firms like Google. Perhaps our production models that involve factories, process engineers and ISO / TS audits are too rigid to take the liberties that the softies can take and get away with them as often as they do.

    But as we have seen, the title Software Engineer is very much in existence. Maybe we need to take a step back from software’s cutting edge to where software takes a secondary seat to the hardware. Car or aircraft entertainment systems, or production process control systems would be good examples, as would be the medical equipment industry.

    The clearest answer I have found so far to the question “Are Software Engineers Engineers?” lies in a job description for a medical equipment manufacturer. Here’s what this software engineer is supposed to manage:

    Development of software

    Verification … of Quality Management and Regulatory Affairs

    Collaboration for the development of software requirements

    Development of the software architecture

    Implementation and integration, supervision of external resources

    Support of product maintenance

    Production and customer care

    This collection of responsibilities sounds more like what I have to manage on a daily basis. This software engineer must juggle the code and its application, must (this being the medical industry) monitor specs and regulations carefully and must ensure that production is secured, whilst also designing in a certain ease of use for the end user (I wonder if they say “end user” or “patient”? I think it makes a difference…).

    It doesn’t sound as sexy as a startup’s freedom or a Google’s heavyweight fleetness of foot, and it doesn’t reflect much of the pioneering spirit of a Brunel or an Edison; but it’s engineering as I know it.

    Perhaps the difference between the software developer and the hardware engineer really is as simple as the maturity of the market and of the company. Just as terrible auto accidents in the 1970s and 80s resulted in ever-increasing regulation, so potential privacy disasters at Facebook and Google are landing them with audits and governmental control.

    Perhaps the Zuckerbergs and Larry Pages of today are the Rolls and Fords of yesteryear, and their companies are destined to become as bureaucratic as their successful forebears. The attraction of startups is that they are small and start under the radar of heavy regulation. To achieve the scale and success of Microsoft or Apple, of BMW or General Electric, they too must generate a strong, supporting skeleton. The trick and the challenge is not to let that become a fossil.

    So in the end, what’s my answer? Is Software Engineering Engineering? Yes, it can be. There are sufficient differences that both worlds can learn from each other, even if they cannot often transfer people (I see myself more easily transferring to the nuclear industry than to Google), but the disciplines and tools involved have their parallels. On balance, I feel that my world could learn more from software than vice versa, especially in terms of sleekness and agility. What could they learn from us?

    Apart from great paperwork, I mean…

    I’d love to hear your thoughts on this theme, too. Are you a Software Engineer yourself? Or are you somewhere in between (in avionics, say, or electronic gadgetry)? Fire away!

    *dumb: software is deemed smart, but it, too, can be reduced to equations and lots of “if…else” statements. Not as dumb as they sound, my components react to certain conditions in particular ways, and with more subtlety than many programmes do.

    → 5:38 PM, Aug 7
  • Ishikawa's stinking fish

    originally posted on one of my several now defunct blogs, called On Engineering on 10th May 2012

    Quality issues cannot be counted amongst my favourite activities. They can normally be categorised as “urgent-uninteresting”, which is just about the best demotivator I can imagine. They’re negative, cause huge floods of emails, assumptions, obfuscation and general panic. Some people thrive on this sort of situation. I, generally, don’t, as was again proven by a quality concern with some Chinese colleagues. I get involved simply because our Tech Centre has the best kit, so we can test what others can’t. It’s annoying, because development people rather prefer looking forwards than downwards at self-shot feet. Nevertheless, some quality issues are useful (“never let a crisis go to waste” and all that). Some are excellent impromptu team-building exercises and others simply turn up some interesting artefacts, like this beauty below. It stopped me in my tracks - never have I seen an Ishikawa diagram illustrated so literally as by my Chinese colleagues…!

    For those not yet in the know, the Ishikawa, or fishbone, diagram is a way of formalising the investigation into the potential causes of a particular issue. It’s a methodology that forces you to look at each of the 6 M’s (others count 7 or even 8) in order to gain the full picture of what might have gone wrong to cause the issue (sorry, problem) that we’re working on (the Environment one is clearly an awkward ‘M-ification’ for the purposes of alliteration); there’s a rough sketch of the idea after the list:

    Machine (technology)

    Method (process)

    Material (includes raw material, consumables and information)

    Man Power (physical work)/Mind Power (brain work): Kaizens, Suggestions

    Measurement (Inspection)

    Milieu / Mother Nature (Environment)
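
    And here is that rough sketch: a fishbone is really just a problem statement with one branch per M. The causes listed are invented placeholders, not the real ones from my colleagues’ diagram.

        # A fishbone is just a problem statement with one branch per M.
        # The causes here are invented placeholders, not the real ones from
        # the quality concern described in this post.
        fishbone = {
            "problem": "part fails leak test",
            "Machine":     ["worn crimping tool"],
            "Method":      ["assembly sequence not followed"],
            "Material":    ["O-ring batch out of specification"],
            "Man power":   ["new operator, training incomplete"],
            "Measurement": ["leak tester out of calibration"],
            "Milieu":      ["temperature swings in the assembly hall"],
        }

        print(fishbone["problem"].upper())
        for m, causes in fishbone.items():
            if m != "problem":
                print(f"  {m}: " + "; ".join(causes))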

    I can’t tell you precisely what the 5 Chinese characters represent in this one. Whatever the causes of this particular quality issue, the discovery of this putrid gem of a rotten, stinking fish amongst the rotten, stinking debris of a quality concern almost made up for it…

    → 3:38 PM, May 10
  • Pass the resource - and spread thinly

    originally posted on one of my several now defunct blogs, called On Engineering on 7th May 2012

    Terracotta Warriors By xiquinhosilva - www.flickr.com/photos/xi… CC BY 2.0, commons.wikimedia.org/w/index.p…

    As an Englishman living and working in Heidelberg, I am often asked if I work for SAP, the business software giant based down the Autobahn in Walldorf. I don’t, of course. If I did, I’d be writing about software development, the management thereof and how utterly astounding their legendary canteen is.

    The question is not a daft one, though: around 10 thousand people work at SAP, forming a more or less willingly thrown together (or at least well-paid) melting pot of 80 nationalities with English as the working language. It’s easy to assume, then, that a middle-class, technically minded foreigner living in Heidelberg earns his crust and her Grauer Burgunders at SAP.

    I know a few people who work at SAP, and they are generally of a particular ilk (physicists and mathematicians, i.e., not my type) so I know that I don’t need to yearn to work there, but there’s one thing I envy them: resources for R&D. Globally (2011 figures), SAP has 16 thousand employees working in R&D (12 thousand work in sales…).

    16 thousand in R&D…

    (drifts into a reverie)

    (Snaps back with a jolt)

    I’m not in that place. I am a development engineer at an automotive supplier; my development activities really only skim the outer atmosphere of the unique world of R&D. Is this a surprise? Is it a disappointment?

    Let’s think about the surprise factor first. When I consider my time at university, I don’t recall ever having heard the word “resource” discussed in any manner other than as a general term for learning material. We picked up the material, used the libraries and even the nascent internet; but I was never a resource myself.

    Resource was always a “hidden” theme. We were aware of time pressures with the need to study such a wide range of subjects whilst maintaining sanity and health through extracurricular pursuits, but it was always a case of everybody finding their own balance.

    My aerospace engineering course did involve one larger team design project that was formative, but as far as I could see, research projects could run in a business vacuum, free from excessive emails or telephone calls, requests for assistance from around the world and quality alerts to have to jump onto. Ah - there we go. Did you notice that word in there, mentioned for the first time in this post? Team.

    Entering the workplace was a relatively soft jump for me: I joined Ford in the UK, which at the time was recruiting heavily - and, crucially, I joined a team. By that I mean there were several of us who could do each other’s jobs if necessary, and we worked both on improving the product and on improving the ways we worked on those products.

    Recently, I was part of a globally distributed team that developed a new procedure for drawing release and approval. The word ‘team’ in this case represents more a collection of perspectives than anything that could really work to fight the necessary fights. We had representatives from design, from quality, from manufacturing; but none of us were interchangeable in the way that my team position at Ford was.

    Developing the process itself was the fun and interesting part. I managed to find a spare license for Microsoft Visio (alas only the 2003 version which is now looking very old indeed) and used it to create the workflow. As it is a cross-functional process, the workflow runs in lanes - development, manufacturing engineering and quality - so the visual aspect was initially quite appealing. Adding the “if… then” loops and the different entry points for product at the various stages in their drawing lifetimes certainly de-streamlined it, though.
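
    Stripped of the Visio lanes and arrows, the skeleton of such a workflow fits in a few lines of code. The steps, owners and the single “if… then” loop below are simplified placeholders, not our actual released procedure:

        # The bare bones of a cross-functional drawing release flow. Steps,
        # owners and the single "if... then" loop are simplified placeholders,
        # not our actual released procedure.
        steps = [
            ("Development",               "create or revise drawing"),
            ("Development",               "internal design review"),
            ("Manufacturing engineering", "feasibility and tooling check"),
            ("Quality",                   "inspection plan and approval"),
            ("Development",               "release drawing at new revision"),
        ]

        def run_release(review_passed=True):
            for lane, step in steps:
                print(f"[{lane}] {step}")
                if step == "internal design review" and not review_passed:
                    print("[Development] rework drawing and resubmit")  # the "if... then" loop
                    return run_release(review_passed=True)

        run_release(review_passed=False)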

    But actually running the procedure has been no fun at all. The problem is that it was implemented as a project in itself without great consideration for the resource implications of centralising what was in effect a totally distributed (and therefore wasteful, if potentially evolutionary) process.

    So the situation is that I am currently the only person in a company of over 16 thousand who can create the drawings that those 16 thousand people need. Imagine SAP setting up a system whereby those 16 thousand in R&D could only work if I finished one particular task amongst many. It wouldn’t really make sense, and this is something that I’m struggling with at the moment.

    And so to my second question - am I disappointed not to be part of that resource pool? In a word, no. It’s harder to explain, but perhaps this is only because of a lack of imagination: I simply cannot imagine being able (or only being allowed) to work on one problem at a time. Whilst I am trying to relearn the art and discipline of focus, it doesn’t come naturally to me. I thrive on bandwidth, even if the transmissions occasionally get muddled up and jam because of it. I’m a variety type.

    Coming back to those cohorts of SAPlers, though: despite their numbers, there have been reports recently that even they are feeling pressured to breaking point in their jobs. Which means that you can spread even 16 thousand researchers thinly. R&D resources generally expand to fill the product lines a company is working on; even within the relatively low number of headline products SAP produces, there is a vast number of modules that require development and linking in to others, so there’s plenty for lots of clever people to do. (Google has around 33 thousand employees, Microsoft 92 thousand, of which 35k are in R&D).

    So today’s R&D behemoths are commercial, distributed amongst the universities and amongst the startups that thrive or die on their findings; I’m somewhere in no man’s land - where are you?

    p.s. Happy 40th, SAP!

    → 8:03 PM, May 7
  • In, On and Around Detail

    originally posted on one of my several now defunct blogs, called On Engineering on 3rd April 2012

    Source: Wikipedia, Ishihara diagram. Editor’s note: I’m colour blind, and genuinely cannot see which number is shown in this particular Ishihara test.

    “It’s just a minor detail” is a commonly-enough heard phrase, with only two problems: the words “just” and “minor.” Take those two belittling extraneous modifiers out and say it again: “It’s a detail.”

    Often enough, putting a concept or even prototypes together is the easiest (and the most fun) part of a project. The main parameters are in place; detail is “the rest” required so that something that vaguely resembles a manufacturable, saleable product results. It’s those unknowns that need pondering, the “not quite rights” that need purging.

    Put romantically, not just devils but angels, cherubim, seraphim, orcs, gremlins and all sorts of other critters reside in detail. They battle continuously to make or break your product, whatever that might be. Wrong material or tempering grade? It’ll deform too soon, or just snap - if you’re lucky at the prototype stage. You got your tolerances wrong? They’ll bite you within three months of start of production, on a Friday afternoon before Christmas. Overspecified things to be on the safe side? Your product might be unfeasible to make, or too expensive. When you get the details right, however, that product can go out there and do what it’s supposed to do - satisfy customers and make money. So, we “just” need to get them right.

    The first thing to realise is that there are no “hero” details. Every item on a drawing or in a specification can lead to something going wrong, and every item that is not there can, too. Picking your way through the thicket requires concentration, discipline and focus, namely the three things that are almost impossible to come by in a normal working environment. So, alongside placing the spotlight on details, this post looks at how and when to work on details. Without thinking, problems don’t go away. Without doing, they don’t go away. Normally, one of those activities precedes the other…

    One key way is thinking solo: giving yourself the time, room and environment to think. Daydreaming is often for me the best way of filtering out my surroundings and letting ideas or understanding drift into my conscious view, when I’m least expecting them. Then I at least know what I need to work on. Then it’s a case of separating myself from the others and going somewhere quiet to focus on something. Brainstorming with colleagues is another method - call a guerilla meeting, half an hour over a coffee, just before lunch to ensure the meeting finishes on time; longer, scheduled DFMEAs are another good way of at least discovering what needs to be determined and proven before a product hits the scrap bin more often than the “goods out” one.

    There’s a key difference between the solo and the team thinking methods: thinking is often misconstrued as “not doing anything” whereas a team meeting is by default acceptable, no matter how inefficient. Let’s have a look at solo thinking, then.

    Or rather, look away - solo thinking is by and large impossible where I work and, I suspect, where you work, too.

    We have an office with around ten people packed into around twenty square metres. It’s not a sweatshop, but it is an open shop. Generally, whenever I need to get any quality thinking done, there will be others gossiping, discussing the price of petrol, of mortgages, of coffee (in fact, they are always talking about the price of something). Then an empty skip lorry will race past the windows, chains flailing and rattling, making an almighty racket. It is simply not possible to concentrate.

    So I try to escape.

    I go off and make a coffee. Or I’ll go for a wander to the labs, but not actually get there. I’ll saunter nonchalantly down to a meeting room without any meeting planned or find an unoccupied office, sit down and open up the one thing that I want to be working on at that moment.

    I turn off Outlook. I leave the phone on silent and on its charging station so that it doesn’t vibrate.

    I shut down my browsers.

    Then I load them up again (I nearly always need them for research, and many of our documents are online). Browsers are a huge distraction or temptation, but I find that if I’m really focussed I forget to browse the news or check my private emails or Google+.

    Sometimes I’ll uncap my fountain pen and start sketching, or I break open Excel and start working on tolerance calculations, O-ring compression sets; or I’ll start searching the internet for the one key factor that I need to define what I’m working on.

    In any case, I have to admit, my task-focussed multimedia shut-down strategy can be a tad one-sided. I often need to call somebody - a colleague or a supplier - for an insight on a particular detail. Or I’ll send them an email. And I disrupt their work in the process.

    So I can do detail - but, as with drawings, I can admire detail without necessarily enjoying it; I can occasionally lose myself in detail work, but in the same way that I am slightly colourblind, I don’t see detail like a real clear-headed, focussed checker can. Working on the details of a design is an effort that sometimes proves too much and ends in frustration. I’m not your stereotypical engineering bureaucrat who will gladly, and with a sense of achievement, check every last entry on a drawing or bill of materials. I’m not naturally a detail person; but others are, and that’s why we should work together. It’s what teams are for.

    p.s.

    Jobs I could not do because of this:

    teacher marking exams (= engineer marking PPAPs)

    lawyer (= engineer getting involved in patents)

    customs administrator (= engineer and his paperwork)

    forensic scientist (= engineer working on quality complaints)

    politician (= engineer trying to get all parties on board)

    Oh. I am all of those.

    → 8:33 PM, Apr 3
  • Clearing the decks

    originally posted on one of my several now defunct blogs, called On Engineering, on 2nd March 2012. I also note that the links to my “own” posts at engineerblogs.org are now defunct: a shame, and another reason for wanting to move to micro.blog and have things backed up properly, including via the Internet Archive

    I had a couple of hours today at work in what I can only describe as - for the workspace, at least - blissful calm, with my laptop offloaded to IT for new software and general updates. It was positively therapeutic having my twin displays switched off, docking station bare and keyboard stowed away. All of a sudden, I could see my working environment for what it was - a mess.

    So I cleared the decks. Drawings that had lain scattered around in various states of checking or revision were brutally culled. Specifications for review that had sat there for months were shredded. Samples for testing were tidied away against the ever-decreasing likelihood of their being required (the usual samples that arrive with no reference documentation, no indication of what they are for, let alone a formal test request from someone who knew what they wanted). All of those things had become mere symbols, signs to my colleagues that I was - am - busy. Which is of course a far cry from doing anything useful.

    Finally I could see my desk again. That little expanse of off-white reflected my inner calm, the sun shone outside and my cup of coffee smelled great…

    Then I realised that I need to do some desk clearing with my blog-life, too. I started this blog with the intention of writing about those aspects of engineering that generally aren’t taught in uni. Things like PPAPs or finding things. As part of my background research to this blog, I came across a site called engineerblogs.org with some decent observations from a range of writers. They had a notification up saying that they were looking for writers, so I ended up submitting one or two posts, which became three and soon four. I’m still a guest blogger there, but I realise that I am neglecting my own blog.

    Hence the need to order my thoughts in terms of what ideas belong to this blog and what to EngineerBlogs. I can’t say I’ll have any answers straight away, but I don’t want to switch off the Canny Engineer just yet.

    → 9:11 PM, Mar 2
  • On an OK dough 'K'

    originally posted on one of my several now defunct blogs, called On Engineering on 29th Jan 2012

    It’s a while ago now, but over Christmas, I was (watch out, this is going to get exciting) doing the washing up after my sister had made the dinner. (Just to write “pork chops” does the meal an injustice, but that’s basically what it was). The item I cleaned last, because it had by then rather unappetising-looking bits of wet pastry on it, was the beater from our old Kenwood mixer. As I washed, I remembered how this piece of utilitarian design had always fascinated me through its complexity and simplicity. So I took a photo, which I just rediscovered:

    It is designed as a ‘K’, instantly bringing the branding to the forefront. Whether or not this is optimal for mixing pastry I cannot say; but it works very well, generally resulting in great cakes, so its impact on the mixing dynamics of pastry is at least not negative. Its complexity is subtle, but everywhere present. It warps in all three dimensions, combining rigorous straight elements with beautiful curves, tubes with flat and developing blades.

    Some of the joins are no longer quite so beautiful on our example, but after over twenty years of use, that can be expected. Doubtless the assembly process has improved significantly since then (or has been made ever cheaper), but the K-Beater design remains to this day and, even when mass production using 3D printing becomes commonplace, it will remain in the future…

     A classic.

    → 9:01 PM, Jan 29
  • On finding things

    originally posted on one of my several now defunct blogs, called On Engineering on 24th Jan 2012

    At the very beginning of this blog, one of the questions I asked myself was - what do I do all day? Between cups of tea or coffee, lunch and hometime, I mean.

    (An aside: had I not settled down for a cup of coffee and a natter with a colleague recently, I wouldn’t have heard of silane coating systems, nor he of critical entanglement in polymers, so coffee breaks do have their uses.)

    It’s a tricky question to answer, in reality. Where I work, we have a form of time-tracking database in which we record the hours that we spend on particular projects. I can’t extract my own time from each pot, though, so I’d be working from overall team trends, which would only loosely represent my own. Suffice to say I’m not that interested in setting up my own private tracker; for now we’ll say that I am pulled in so many different directions that it’s difficult to focus on any one activity for any length of quality time. One task that takes up an inordinate amount of time, and one that I want to discuss here, is finding things.

    Others would refer to it as research, but sticking to the simpler term of “finding” better reflects what I am usually doing. Our labs have produced thousands of reports over the years. Many are repetitive tests for series production control or “requalification”, lots are investigations into manufacturing or quality issues, and many are validation reports that prove to a customer that we can deliver to an overly complex and largely unchallenging specification. We even have some that are reports on new developments (gasp!). At present, all the good data contained within those reports is effectively fossilised in the hardening goo of Microsoft Word.

    How do we detect patterns? Generally, we don’t. How do we find the totality of relevant test reports over the years for a given product? I won’t say that we can’t, but it’s harder than just tapping a query into a Desktop Search. We can search and filter through our test database, but that’s clunky, too (searching across several years? Forget about it).

    So the eternal problem remains eternal - how do we develop the relevant knowledge so that we can design better systems? That’s one of the key challenges I think we face.

    A small example: recently I had to answer some questions from a colleague in Korea regarding a hose connection. Simple, on the face of it. The connection requires a particular tube endform profile to ease insertion into the hose and to promote the fairly crucial function of sealing. Unfortunately, the drawing he was referring to was created in 2003 and I had no idea where the design principles that led to that particular drawing came from.

    I searched and found… another copy of his drawing, but not much else. So, it came down to the oldest of information transfer systems: colleagues. The colleague who created that endform drawing all those years ago still works in my department, and ask him I did - but could he remember for the life of him how he arrived at those particular dimensions? No. What and where are our tools? Here’s a possible search pattern (with a rough sketch of the first two steps after the list):

    1. Desktop search for anything containing that part number (emails, reports, lists, etc.) or description

    2. Search for reports containing that part number or description

    3. Search for reports containing similar parts or a similar description

    4. Search for relevant specifications

    5. Ask colleagues
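
    Steps 1 and 2 are, in practice, a brute-force trawl. Here is a minimal sketch in Python of what that trawl amounts to; the part number and the report share are invented for illustration, and reading the content of Word or PDF reports would need an extra library (python-docx or similar), so this only matches file names and plain-text files.

        # Rough sketch of search steps 1 and 2: look for a part number in file
        # names and in plain-text content under a report share. The part number
        # and the path are invented for illustration.
        from pathlib import Path

        PART_NUMBER = "A123-4567"                       # hypothetical part number
        REPORT_ROOT = Path("//fileserver/lab-reports")  # hypothetical share

        TEXT_SUFFIXES = {".txt", ".csv", ".htm", ".html", ".xml"}

        hits = []
        for path in REPORT_ROOT.rglob("*"):
            if not path.is_file():
                continue
            # A hit on the file name covers Word/PDF reports we cannot parse here
            if PART_NUMBER.lower() in path.name.lower():
                hits.append((path, "file name"))
                continue
            # A hit on readable, plain-text content
            if path.suffix.lower() in TEXT_SUFFIXES:
                try:
                    if PART_NUMBER.lower() in path.read_text(errors="ignore").lower():
                        hits.append((path, "content"))
                except OSError:
                    pass

        for path, where in hits:
            print(f"{where:9}  {path}")

    Even a crude trawl like that beats clicking through the test database year by year; the real problem, as above, is that the interesting information mostly lives where it cannot reach.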

    If nothing interesting is dredged up by that lot, then it’s a case of redesigning from scratch. How do we do that?

    1. Use Design of Experiments principles to get parts made at various diameters and angles and tested (see the sketch after this list)

    2. Request some Finite Element Analysis on the joint to ensure that nothing’s going to give too soon.

    3. Get some fluid dynamics studies done on it to ensure that I’m not overly constricting the joint

    4. … umm, file a report somewhere (go back to 1.)
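
    Step 1, the Design of Experiments matrix, is the only one of those I can sketch in a few lines. Here is a minimal full-factorial version in Python; the diameters, pilot angles and sample sizes are invented for illustration, not taken from any real drawing.

        # Rough sketch of a full-factorial test matrix over the endform
        # parameters of interest. All values are invented for illustration.
        from itertools import product

        diameters_mm = [6.0, 8.0, 10.0]     # hypothetical tube diameters
        pilot_angles_deg = [28, 32, 38]     # hypothetical endform pilot angles
        parts_per_cell = 5                  # replicates per combination

        matrix = [
            {"diameter_mm": d, "pilot_angle_deg": a, "quantity": parts_per_cell}
            for d, a in product(diameters_mm, pilot_angles_deg)
        ]

        for row in matrix:
            print(row)
        print(f"{len(matrix)} combinations, {len(matrix) * parts_per_cell} parts to make and test")

    Nine combinations and forty-five parts, before anyone has touched the FEA or the fluid dynamics - which is exactly why the report at the end of it all deserves to be findable.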

    The answer here would then be to link the report to the drawing somehow, either simply by placing the report number somewhere on the drawing, or by placing the link somewhere in the CAD model, or as an additional link or document next to the print on our SharePoint system.

    Yes, we use SharePoint. It’s clunky and has some gaping functionality holes at the moment (we’re still on version 2007), which means that people don’t like to use it. However, this - or a dedicated PLM system - is the way forward in terms of linking everything up.

    Curation of data is also important: labelling, keywording, making things more searchable. It’s a dull and dry task to start, or to perform retrospectively (company librarians required), but if it can become second nature to document control, it could help.

    Generally, putting data to use to obtain information, knowledge and even wisdom is becoming ever more important: Big Data is Big Business, as HP proved by stumping up $11.7 bn for Autonomy Corp. In theory, using an Autonomy search would be better than hit-and-hope: hitting the Windows Desktop Search button and hoping that you’ve entered the right keywords, both in the search and in the document (and praying that the report was written in English by somebody who could spell). But I’ve only ever experienced Desktop Search, and I don’t expect to see such enlightened methods used where I work.

    Another tack is social. I am a follower of the blog Confused of Calcutta, run by J.P. Rangaswami, who now works for Salesforce.com. He is a true social-enterprise and open-market evangelist who loves the ways of communication available to us through Facebook, Google+ and their more enterprise-tuned cousins, Chatter and Yammer (an unfortunate name that sounds like the German for “to complain”). The idea behind these tools in the workplace is an interesting one. Using my example above, I could write on my wall: “Wondering why this here endform pilot angle is 32° and the other 38°, and whether or not it matters in the slightest”. The post would be seen by all of my followers and - the theory goes - one of them may know the answer, or know where to find it. Social beats email because the author of the question does not need to think about who a mail would need to be addressed to, or who should be “carbon copied” on it, and does not run the risk of either missing people out or annoying disinterested parties. Social also beats email because the act of “liking” or responding to a post broadcasts that action to the connections of the person responding, potentially creating a snowball of knowledge.

    However, we all know the downside of our social networks - signal to noise. How can we ensure that the right people even read our question and hold it in their active mental buffers for long enough to action it, amongst all the other questions, observations and general banter going on around them? This is where filtering comes in - but filter incorrectly and you’ll miss the occasional important missive in the storm.

    So, right now, I don’t know. In so many ways. If you’ve used Chatter, Yammer or tools like Autonomy, I’d love to hear from you. Has it transformed the way you work, the way you search, the way you engineer?

    In the meantime, I’d best get back to what we engineers do best, which is to… Umm, where did I file that job description?

    → 5:28 PM, Jan 24
  • On PPAPs

    originally posted on one of my several now defunct blogs, called On Engineering, on 15th Jan 2012

    There’s one thing that I need to get off my chest early on in this blog, as it has been weighing on my mind for some time. It is the bane of my automotive life thus far, the PPAP.

    Almost nobody knows, nor really cares, these days that PPAP stands for “Production Part Approval Process”, nor that the system is responsible for terabytes of redundant data. The idea behind PPAPs is - I admit - sound, in that each and every part in a car is fully validated before being built into a vehicle. However, having worked on tube fittings for a few years and come across PPAP submissions “weighing” 26 megabytes for threaded nuts weighing 13 grammes - and still being wrong - I feel that the PPAP process itself needs investigating.

    The PPAP is an information pack that includes the drawings, measurement data (with capability), test data, measurement and test equipment certification, FMEAs, control plans and process flows for the part in question. All of it needs to be correct to be accepted by the customer.

    It was created by the AIAG (Automotive Industry Action Group), which originally consisted of the “Big” (now “Detroit”) Three of Ford, GM and Chrysler. One of the driving principles behind it was to standardise the requirements of these three manufacturers so that their suppliers need only produce one data pack per part and not have to reproduce it with tweaks for each customer. The other key principle was and remains that parts will fit and work properly when they are assembled into the cars they are destined for. Equally, the PPAP should prove that the supplier is ready to produce good parts at volume, and it provides baseline information for the life of the part (“back then at the beginning it was like that, now it’s like this”).

    However, the PPAP system has become a monster. This monster has generated its own sub-industry: dedicated employees who work only on generating or approving PPAPs, and people like me checking supplier submissions like school teachers marking essays. There are great chains of PPAP submissions, from sub-sub-suppliers to suppliers to the OEMs, and the whole system is populated by disinterested humans.

    The number of submissions that I have checked and rejected in the past for vast swathes of missing information was depressingly large. So these packs (of lies) are being batted back and forth from server to server, person to person, and hours are being wasted the world over.

    My relative fortune in my secret PPAP life was that I only had to check the technical aspects of the submissions; somebody else in the quality department was responsible for going through all the other documents and certificates (generally, “is there something here that looks like a certificate, or not?”). But even the technical side of things, which really should have been simple, always seemed to lead to confusion.

    Why should it be simple? Well, there are drawings with dimensions (some with complicated GD&T, admittedly, but still doable) that need to be measured (and not just reported with the value “OK”). The supplier just needs to check off the dimensions one by one and show that what is being produced on real parts meets those requirements. Fine. But the drawings also have words on them, sometimes in the form of specifications, which should give the supplier a small hint about some other aspect concerning the parts - like, say, performance - but often somehow don’t. Maybe it is actually all just too subtle and not simple at all.

    It was, alas, very rare in my experience to receive a PPAP and to be able to complete it there and then, on the spot, no more questions asked. So not only do the PPAP submissions need to be sent along the chain of suppliers, they need to be reworked and resubmitted. Usually this involves testing being (re-)started just as the parts are urgently required, which means that we have to grant deviations for a limited period until all of the testing is completed (corrosion testing, anybody?), we have to request deviations of our customers, and we have to keep track of those deviations as well as the versions of those PPAP documents… and so on.
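
    The dimensional part of that really is simple. Here is a minimal sketch, with entirely invented nominals, tolerances and measurements, of what the dimension-by-dimension check amounts to - the part that too often comes back filled in with a single “OK”.

        # Rough sketch of a dimension-by-dimension check against drawing
        # tolerances. Every value here is invented for illustration.
        CHARACTERISTICS = {
            # name: (nominal, lower tolerance, upper tolerance), all in mm
            "thread major diameter": (12.00, -0.10, 0.00),
            "hex across flats":      (17.00, -0.20, 0.00),
            "overall length":        (10.50, -0.15, 0.15),
        }

        measurements = {
            "thread major diameter": [11.95, 11.93, 11.98, 11.96, 11.94],
            "hex across flats":      [16.92, 16.88, 16.95, 17.02, 16.90],
            "overall length":        [10.48, 10.49, 10.41, 10.52, 10.47],
        }

        for name, (tol_nom, tol_lo, tol_hi) in CHARACTERISTICS.items():
            lower, upper = tol_nom + tol_lo, tol_nom + tol_hi
            out_of_spec = [m for m in measurements[name] if not (lower <= m <= upper)]
            verdict = "within spec" if not out_of_spec else f"out of spec: {out_of_spec}"
            print(f"{name:22}  {lower:.2f} to {upper:.2f} mm  ->  {verdict}")

    Five measurements per characteristic is nowhere near a capability study, of course, and the real submissions also need the words on the drawing - the specifications - to be honoured, which is where the confusion usually starts.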

    So, there’s lots of time wasted, there are megabytes of redundant and duplicated information sloshing around, repeated for each of the approximately 15,000 unique parts in each and every car driving around the world (I’m not sure PPAPs are used by Khodro in Iran, however).

    And yet, PPAPs, like audits, are impossible to argue against. I recall hearing that Continental tried to settle on a basic level of PPAP above which the customer would have to pay, but I think that rebellion was quickly crushed. If you’ve heard more about it, let me know in the comments.

    In the end, my recommendation is that, if you’re a young budding engineer looking for a first job, try to avoid anything that says “PPAP” on it. If you still want the job, then make sure that the PPAP bit is not too high a percentage of the job and is for a limited period only - and spend that time finding someone else who will do it for you (preferably not an engineer!).

    → 10:28 AM, Jan 15
  • This one's on me

    originally posted on one of my several now defunct blogs, called On Engineering on 12th Jan 2012

    The glance that resulted in the (low energy) lightbulb switching on in my head that in turn resulted in this blog was towards a book lying sideways on top of Dad’s collection of Marcel Proust’s recently translated Remembrance of Things Past. The title (unlike that previous sentence) was pithy and the text large: “On Music”, Alfred Brendel’s collection of essays from his career as a pianist - about the music, its playing and interpretation, about selecting the right piano for the right piece, and the like.

    It set me thinking On Engineering. What sort of essays (blog posts, of course, these days) would that entail? Brendel’s writings range from the highly technical (“I note with regret that in bar 73 of A major II [Schubert] softened the staggering G major chord by turning it into a G sharp appoggiatura” is highlighted in the Guardian review) to the anecdotal, but he stays focussed on the subject of music. I had the notion (I may still be wrong and my Google searches insufficient) that there is little out there that seeks to describe in a similar way what it is to be an engineer. Whilst I am no Brendel of the engineering world (or Brunel, Tesla, or Tupolev), my blog should focus on the career of someone working as a hardware engineer.

    So what is my engineering experience? What stories do I have up my sleeves to share with you? Why would you want to listen to me? Well, I have worked on climate control systems, tubing, boxes and packaging in my time. I have made carbon fibre wing sections and I have a patent to my name, plus a couple that didn’t quite make it. I have used many of the tools available to the engineer (Word, Excel and Outlook being the main ones, followed closely, I regret, by PowerPoint). I have travelled extensively. I have interfaced with chief engineers and interior designers as well as supplier shop-floor operators. I have used project management tools and I have been involved in most aspects of creating, producing and selling product. How much of all of this was taught to me during my undergraduate courses? Almost none of it. How much, conversely, have I used of my degree? Almost none of it. So can I still call myself an engineer if my studies and certificate appear to have been largely irrelevant? I believe so, but that’s one of the concepts we’ll explore in this blog.

    There’s plenty of material there for me and, I rather hope, for you through your comments and suggestions, to put together a good picture of what being an engineer actually entails. With that picture painted, we can compare it with what being an engineer should mean and plot a corrective course if necessary.

    → 10:21 AM, Jan 12
  • On Engineering

    this was originally posted to my defunct blog On Engineering on 7th Jan 2012

    The number of technical books within the field of engineering is far greater than I can count (Google gives around 5 million results for the search phrase “engineering books”). Yet few are the books, or even blogs, about what engineers actually do. I mean on a day-to-day basis. Yes, we solve all the world’s problems, we turn ideas into reality, we make things and their processes more efficient, cheaper - we optimise - but what do we actually do? All day?

    I have been in engineering since completing my studies and degree in Aerospace engineering at Bristol University in the 1990s. Whilst I could never claim to be a top engineer, I am good enough and thoughtful enough to be able to write about it from the perspective of an experienced practitioner. So I thought I would give it a go.

    This blog is also a little test to see if I’ve been paying attention all those years…

    → 10:13 AM, Jan 7
  • RSS
  • JSON Feed
  • Micro.blog