
Thursday, May 16, 2013

Agile Without Social Engineering

In 2006, Ivar Jacobson famously summarized, "Most important, agile is about social engineering."  And indeed one of the things that makes so many agilists so darned loveable is that we are, as a friend of mine put it yesterday, "the kind of people who want to create a workplace where you can go and still be a human being."  Not a "resource," not an "FTE," but a human!  It's an inspiring dream!
http://www.pbpp.state.pa.us/portal/server.pt/community/human_resources/5364/how_to_apply/494613
But you know what?  It's not a goal you can attack directly, even if you are, for some reason, under the impression that you're in charge.  In fact, in my view, a lot of us are completely wrong about what the "lead" and "lag" measures are for a successful "agile" transformation of an organization.

"Lead" measures, you will recall, are the little things you can observe which reliably indicate that something bigger and better is about to follow.  So for example, "open team communication" is a lead measure for agile, if you can measure it, whereas "85% of the company's teams are executing with scrum" is a lag measure.  A good agile coach will seek a company offering good open team communication, happy to think that they will be able to easily build a large number of scrummers to start scrumming everywhere you look, and lots of colorful post-its.

But wait, really?  What is the means, and what is the end?  Are we insisting on a certain kind of culture so we can impose a specific set of processes?  Does that even make sense?  Is that what the Agile Founding Fathers meant when they said they valued "individuals and interactions" over "processes and tools"?

We pride ourselves on thinking that culture change is needed before we can cultivate agile.  How many times have you heard, "I'm not going to get anywhere with these people.  They're all command and control mainframe people" or "they have a team of 5 people in 4 time zones" or "they only want to change their requirements process, and not touch engineering," or...or...or.  We sit in judgement and say "you know what, you people, you don't have a culture of trust and communication, so you're pretty much doomed.  I don't see agile working here."  Some of us turn down the work from these "losers" point blank, while others of us sneer silently to ourselves as we bite the bullet and cash the paychecks.

We would be better off thinking that successful software delivery is the best breeding ground for culture change.  Let's get down to brass tacks.  In an economy where technical people can move fairly fluidly from one company to another, a software development team is not "Camelot."  It is more like a giant polygamous marriage of convenience.
  • The company needs something done
  • The team members want to contribute towards the company's goals and get stuff in return like money, experience, and recognition.
Coaching the team is going to consist of finding the points where the team members' self-interest overlaps with the company's goals.  As a coach, you will provide some helpful scaffolding which can support employer and employees as they pursue shared goals following some modified processes.  Once they're used to the new techniques, you go away.  Here's my advice to you, and this is coming from a person who really does prefer to work with humans, not FTEs:

Go with a few classic delivery success metrics, and let culture take care of itself.  Ironically, if you want to make a difference to people's lives, you should help them:
  • Get some actual metrics on their current corporate and team performance using agreed-upon objective measures.  A shockingly small number of companies measure this stuff.  What is your actual speed to market today?  How are you measuring code quality?  What is your total cost of ownership?  Total cost of quality?  What are your company's goals?  What is your scorecard?  Or even "how much do you measure, and how much do you reveal about the results?"  (A minimal sketch of the speed-to-market measurement follows after this list.)
  • Analyze what the pain points are and pull from your agile bag of tricks to suggest lowest effort/highest payoff techniques to try.
  • Prove success by comparing "before" and "after" business metrics.  
  • Repeat.  Plan, build, check, act.  Etc.
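
To make the "speed to market" question concrete: most tracking tools already hold the dates you need, and a few lines of code turn them into a baseline lead-time number you can compare "before" and "after."  Here is a minimal sketch; the WorkItem fields and the sample data are hypothetical stand-ins for whatever your own tracker exports.

    import java.time.LocalDate;
    import java.time.temporal.ChronoUnit;
    import java.util.List;

    public class LeadTimeReport {

        // Hypothetical delivered work item: when it was requested and when it reached production.
        static class WorkItem {
            final LocalDate requested;
            final LocalDate released;
            WorkItem(LocalDate requested, LocalDate released) {
                this.requested = requested;
                this.released = released;
            }
        }

        // Average lead time in days from "requested" to "released."
        static double averageLeadTimeDays(List<WorkItem> items) {
            return items.stream()
                    .mapToLong(i -> ChronoUnit.DAYS.between(i.requested, i.released))
                    .average()
                    .orElse(0.0);
        }

        public static void main(String[] args) {
            List<WorkItem> delivered = List.of(
                    new WorkItem(LocalDate.of(2013, 1, 7), LocalDate.of(2013, 3, 1)),
                    new WorkItem(LocalDate.of(2013, 2, 4), LocalDate.of(2013, 4, 12)));
            System.out.printf("Average lead time: %.1f days%n", averageLeadTimeDays(delivered));
        }
    }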

You need to be a very good reader of a culture to figure out what the appropriate fix is going to be for the problems you see.  But it is seldom a good idea to share those observations with your subjects.  Luckily, in general, a business case can be made for almost any "social engineering" you want to do, so there is no need to get all pious with your coach-ees.  

Do you feel people should pick up the phone and just talk to each other, instead of sending passive-aggressive little emails?  Ask teams to do a pilot where they try discussing things in person on a non-intrusive schedule, and suggest they impose a 3-email limit on all exchanges, after which the participants call a meeting to discuss the issue in real time.  Ask them to stop and consider the results as a group.

People don't even have to like each other to get better results from practices that agile has made customary.  And, on the other hand, proximity, whether physical or virtual, creates a sense of team unity that cannot be gained any other way, particularly if the team is succeeding.  (Or better yet, failing spectacularly and then succeeding due to your helpful process tips.)
 
In real life, most adults don't want to be lectured about how to get along with each other.  Most adults prefer to know an actual corporate goal and be given the right tools to contribute towards the goal, along with appropriate compensation.  People may even settle for less-than-appropriate compensation so long as they choose the trade-off, and their immediate coworkers recognize their contribution.

As a coach, you have a very serious choice to make.  Are you going to lecture people constantly about how they will behave when they are agile?  Or are you going to help them achieve actual goals which contribute to the actual company mission, vision and values?  And let them figure out how they want to get along.  The choice is yours.

http://commons.wikimedia.org/wiki/File:Richard_Burton_and_Julie_Andrews_Camelot.JPG
 

Sunday, May 12, 2013

For Fellow Testing Novices: Some Basics For Provisioning A New Agile Testing Practice

***DANGER LONG POST, SORRY!***

Agilists are fond of catchy acronyms like INVEST (stories should be independent, negotiable, valuable, estimable, small, and testable), SMART (acceptance criteria should be specific, measurable, attainable, relevant, and time-bound), and especially KISS (keep it simple, stupid).

I've spent some time in the last months trying to get my arms around automated agile testing practices, and in the end, I have to admit that despite the "why do you even need testers" mantra you sometimes hear, testing cannot be "kept simple" for an agile project of any complexity.  You really need to think it through very carefully, and there are a lot of moving pieces that don't reduce into a nice 4x3 matrix. 

Not to destroy the suspense, but by the end of the post, I am going to advocate that you hire an experienced, full-time person to set up your quality strategy and lead its execution.  But here is a quick SparkNotes presentation of the dimensions you should keep in mind as you bring in the right people to help. Test architecture is the Ginger Rogers of the agile team.  You may recall that Ginger "did everything Fred Astaire did, only backwards and in heels."  You should think of testers as doing everything developers do, only in reverse and requiring significant "golden data" setup.

From http://www.fanpop.com/clubs/ginger-rogers/images/14574687/title/ginger-rogers-fred-astaire-photo

General Things To Know About The Role of Quality In Your Agile Team

1.  You may want to evaluate ROI for "good quality strategy setup" in terms of what you spend versus cost of poor quality (COPQ).  If you search the internet, you will find that COPQ can only be explained through use of a diagram which includes an iceberg.  I think it's a "Six Sigma" thing.  Here's the nicest one I found.

By Javier Rubio, posted on http://elsmar.com/Forums/showthread.php?t=17616
Even though I think the mandatory iceberg thing is funny, there is a real truth here.  There are a myriad of ways poor quality can cut into your profits and grow your losses.  You need to understand which of those are high risk for you, and you need to build a quality strategy to address them.
2.  Cost of poor quality matters to your bottom line.  As I ranted about in my earlier post about technical debt, clean code allows for fast time to market, fast enhancements as needed over the life of the product, and a low total cost of ownership.  If "clean code" includes making sure that you evaluate your intrinsic code quality along with minimizing functional defects, you can save 25% of your annual maintenance costs, and 18% of your total costs (capital plus operational).
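
To make those percentages concrete with purely hypothetical numbers: if a system costs $4M a year to own ($2.5M in capital work plus $1.5M in operational maintenance), then 25% of maintenance is about $375K a year and 18% of the total is about $720K a year.  Your own figures will differ, but running this arithmetic against your real budget gives you a defensible number to weigh against the cost of a test architect and some automation tooling.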
 

3.  Testing is a vocation.  There's a lot of talk out there about how "everyone should do everything on an agile team."  And that's good.  But even though most people I know COULD do everything, they don't WANT to.  It's not just about "ability," it's about passion.  You need people on your team who are passionate about getting the quality right.  Those people will develop a distinct set of skills, some of which are innate, some of which can be taught, and some of which are learned over time through hard experience.  There are a lot more moving parts out there on your test team than you will ever understand.  Do not underestimate the difference you can make by hiring people who WANT to enforce good testing practices on your teams.

4. Most professional testers have experience with waterfall practices, not agile.  A good waterfall tester will be a good agile tester--hang onto them.  But you need to spend some time explaining how agile teams do things differently from waterfall teams, to avoid making them very nervous.  Good testers care passionately about quality, and they don't want you getting all giddy and forcing them to miss something, which is what they will fear you are doing if you don't explain yourself.  You should bring in a specifically agile testing coach to clarify agile testing practices, and to explain the basic vocabulary around things like "unit test," "component test," "story test," and so on.

5.  You must automate more than one kind of test for agile, in most cases, and using a different automation strategy than you used before.  An agilist is likely to think "automated testing" means "automated unit tests."  A waterfall person is likely to think it means "automated functional regression tests."  A pragmatic person will look at the whole picture and feel an urgent need to hire a test architect who will explain it all and enforce some standards.  This is because:

6.  Your automated testing code base should be architected as carefully as (or more carefully than) your functional code base.  When you read about agile testing on the internet, you will find a lot of rhetoric about "team spirit," and how talking with testers helps developers get done faster, because they understand what they're doing.  This is true, but it is not enough!  Your automated tests are likely to be around a lot longer than your hot-shot, chatty coders.  If your organization has one team for software maintenance and another for projects, you should be thinking about the quality of your automated test bed, not solely how much the team talked together during development.

And seriously, even if your coders do a combination of development and maintenance, are you expecting them to stay with you for the next 25 years or so, and remember vividly what they talked about?  Some corporate software is around that long, or longer.
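
To make point 6 concrete, one widely used way to give UI-driven test code real architecture is the "page object" pattern: knowledge of how a screen is built lives in one class, and the tests read as statements about behavior.  Here is a minimal sketch using Selenium WebDriver; the login page and its element ids are hypothetical, not from any particular system.

    import org.openqa.selenium.By;
    import org.openqa.selenium.WebDriver;

    // Page object: the only class that knows how the login screen is laid out.
    // When the screen changes, this class changes; the tests that use it do not.
    public class LoginPage {
        private final WebDriver driver;

        public LoginPage(WebDriver driver) {
            this.driver = driver;
        }

        public LoginPage enterCredentials(String user, String password) {
            driver.findElement(By.id("username")).sendKeys(user);
            driver.findElement(By.id("password")).sendKeys(password);
            return this;
        }

        public void submit() {
            driver.findElement(By.id("login-button")).click();
        }

        public String errorMessage() {
            return driver.findElement(By.cssSelector(".error-message")).getText();
        }
    }

A test then becomes a few readable lines against LoginPage plus an assertion on errorMessage(), and fifty such tests survive a screen redesign by changing one class instead of fifty scripts.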

So what should your agile test architecture look like?  Think of this as a mapping from the classic "V-Model" for Quality, which most of the world's testers know and understand, to the "Agile Testing Quadrants," which cover the same ground, but from a different perspective.

The V-Model

The V-Model is the one most modern software testers are familiar with.  So if you are working with a team that is moving from waterfall to agile software development, you need to be aware of the V-model, and you need to be able to discuss agile testing in the context of what people already know.  The V-model says that software development occurs in discrete phases, and that testers should be verifying that nothing bad happened at any phase.

So what are the key roles and responsibilities for the V-model?
  • Business Analysts: Record requirements, build models of how the system should work in the future, get business users to agree the team is on the right track.
  • Systems Analysts:  Translate business requirements into system requirements.  The system requirements are the basis of future "systems testing."
  • Developers:  In the V-model, developers may be encouraged to implement reusable tests at the unit level and the basic integration level (sometimes called the "component" level).  In many environments, special technicians are also brought in to test the non-functional requirements, which includes environment testing, performance testing, security testing, and the like. Developers will typically create some test data to prove out their code, but the system will not be shipped to testing with that data available.
  • Technical/System/Quality Assurance Testers:  these testers, generally employed by the information technology organization, perform "system" tests.  This involves building out a rigorous set of test data (static data, transactional data, and combinations of input data that will exercise all systems options), defining system test scripts, and running them.  The system tests sit somewhere between the component tests the developers build and the user acceptance tests built by the user acceptance people.
  • User Acceptance Testers/Super Users:  UAT testers, sometimes consisting of actual operational people brought in at the end of a development cycle to do some manual testing, and sometimes including professional testers, do a thing called "user acceptance tests." This involves actually using the system in a controlled environment.  In a well-functioning organization, UAT should be a short-lived period of manual "exploratory testing," where business people ensure that their highest value needs are met. This usually happens after everything else, so if users don't want to accept what they see, they have to wait a while to get things fixed.
  • Operations:  in the V-model, operations people do "validation reporting," to show that the system is working on an ongoing basis.  To do this they typically get some kind of automatically generated report every day that lets them look for variance in standard metrics like "number of widgets shipped."
  • Automated testers:  the V-model doesn't say so explicitly, but "automated testers" in waterfall are people who build a set of end-to-end functional tests after the software is released to production, while people still remember how it was supposed to work.  These end-to-end functional tests are also called "regression tests."
Okay, so far so good.  (You're okay if you just skimmed.)  In comparison, agile uses something called the "testing quadrants" which cover similar ground, but in a different shape.

The Agile Testing Quadrants
From http://qualitybakedin.blogspot.com/2012/03/agile-testing-quadrants.html, citing Brian Marick, Lisa Crispin, and Janet Gregory
Most agile testers make reference to Brian Marick's "agile testing quadrants," when they describe the theory behind their practices.  Reassuringly, there is a lot of commonality between the V-model and the quadrants.  Most activities are the same (with some notable exceptions), but the timing is different.

Here are some standard(ish) agile testing roles and responsibilities:
  • Business and Systems Analysts: build written models of how the system should work in the future.  Build ongoing communications channels from business people to the rest of the team.  And, most importantly from a testing perspective, ensure "stories" are created which divide the system up into testable vertical slices of functionality that can be tested functionally without the need for intervening and separate "systems" tests.  Each story has "acceptance criteria" which fit into a larger narrative around potential uses within scenarios.  This is key:  agile stories marry systems requirements to functional requirements, and enable the team to do functional testing to ensure system behavior. 
  • Everyone together: before beginning development on any particular story, and as needed while the story is developed, discuss how the story acceptance criteria should be tested from a functional perspective, and from an architectural perspective, so everyone is clear on their role.
    • Determine how to test any GUI widgets--have dev build an automated unit test using a tool like Jasmine?  Have testers manually verify?  Build an automated test using a UI-friendly tool like Selenium or Twist?  
    • Determine how to test the system's functional behavior as laid out in the acceptance criteria, and what additional edge testing or negative testing might need to be done beyond what is specified explicitly in the written story.  Which tests should the team drive through the user interface, using a combination of manual and automated functional tests?  Which criteria should the team test automatically at the service level or below, using a tool like Fit, FitNesse, Concordion, or JBehave?  (A hedged example of the service-level option follows after this list.)
    • Consider system impact of tests on the overall test architecture.  How will new tests affect the existing UI-driven test scenario library?  The service-driven library?  The integration test bed used at check-in?  The automated functional/system test set run continuously or daily?
  • Developers:  Through TDD, developers implement reusable tests at the unit level and the basic integration level (sometimes called the "component" level).  The ones that are prone to changing or complex to test at the unit or component level have automated tests created which are run at every check-in.  These tests are the ones identified in quadrants 1 and 4.  Not coincidentally, the matrix calls these tests "Technology Facing."  Additionally, if any instrumentation is needed to support automated tests being scripted by the testers, developers build those fixtures.  In many environments, special technicians are still brought in to test the non-functional requirements, which includes environment testing, performance testing, security testing, and the like.  Developers generally handle issues around environments by automating the deployment process as much as possible using a continuous integration tool like Jenkins, Go, Cruise Control, or HP ALM. 
  • Technical/System/Quality Assurance Testers plus User Acceptance Testers/Super Users:  For a new agile team, we often create a small team of experienced testers from the combined IT and business organizations.  This group focuses on the business-facing user acceptance tests, shown in quadrants 2 and 3.  They build the functional test scripts needed for UI-driven scenarios and system-driven scenarios, supported by tooling done in the development group. Automation is used on scenarios with an eye to plugging local story tests into the end-to-end test bed run nightly or continuously.  GUI-based testing is done manually, with automations created for functions that are likely to change.  Optimally, the automations cover each "acceptance criterion" in each story, to ensure that the system does everything the business wants it to do.  Story-based tests in agile are called "functional tests," "story tests," "functional acceptance tests," or "examples," depending on who you are reading.
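
As a hedged example of the "service level or below" option mentioned in the list above: a story's acceptance criterion can often be stated directly as a plain JUnit test against the service layer, with no UI in the loop.  The story, the discount rule, and the classes below are invented for illustration; the point is the shape of the test, not the tool.

    import static org.junit.Assert.assertEquals;
    import org.junit.Test;

    public class ReturningCustomerDiscountTest {

        // Hypothetical production code, inlined here only so the sketch compiles.
        static class Customer {
            final boolean returning;
            Customer(boolean returning) { this.returning = returning; }
        }

        static class PricingService {
            double totalFor(Customer customer, double orderAmount) {
                boolean discounted = customer.returning && orderAmount > 100.00;
                return discounted ? orderAmount * 0.90 : orderAmount;
            }
        }

        // Story: "As a returning customer, I get 10% off orders over $100."
        // The acceptance criterion is exercised at the service level, not through the UI.
        @Test
        public void returningCustomerGetsTenPercentOffLargeOrders() {
            double total = new PricingService().totalFor(new Customer(true), 120.00);
            assertEquals(108.00, total, 0.001);
        }
    }

Tests like this plug straight into the nightly or continuous end-to-end run described above, and they stay fast because they never touch a browser.
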
So roles collapse, and everyone needs to work together, and a bunch of things need to be automated, and it pretty much has to happen in real time.  Also, there are a couple of big pieces missing when you do the mapping.

Things the V-model leaves out:
  • Usability and Alpha/Beta testing.  Especially as agile development has incorporated the concept of "continuous delivery," it has become possible to simultaneously deploy multiple versions of the same production code base to different audiences to test usability.  Metrics on the screens capture efficiency, where people navigate, etc.  The ability to test the software and fine-tune its behavior through variables even after deployment provides a strong contrast to what is available in waterfall, where even basic misunderstandings of functionality may not come to light until the user acceptance period, and by then, it may be too late to change the software at a reasonable cost.  (A small sketch of the simplest mechanics for this kind of audience split follows below.)
I'm not grinding that particular axe right now.  Just saying.  (You're still okay if you were just skimming.  That's the wonder of bullets and bolding!)
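
For what it's worth, the simplest mechanics behind "deploy multiple versions to different audiences" can be sketched in a few lines: a deterministic split on a user identifier behind a toggle.  Real shops usually reach for a feature-flag or experimentation service, and the class and field names here are invented, but the idea is the same.

    // Deterministic A/B split: the same user always lands in the same bucket,
    // so usability metrics for the old and new screens stay comparable.
    public class CheckoutExperiment {

        // Percentage of users who see the new screen; tune it after deployment
        // without redeploying anything else.
        private final int percentInNewVariant;

        public CheckoutExperiment(int percentInNewVariant) {
            this.percentInNewVariant = percentInNewVariant;
        }

        public boolean showsNewCheckout(String userId) {
            int bucket = Math.floorMod(userId.hashCode(), 100); // stable 0-99 bucket per user
            return bucket < percentInNewVariant;
        }

        public static void main(String[] args) {
            CheckoutExperiment experiment = new CheckoutExperiment(10); // 10% see the new screen
            System.out.println(experiment.showsNewCheckout("user-1234"));
        }
    }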

Things the Quadrant leaves out:
  • Ongoing operational testing (validation reporting).  Quite a lot of the agile literature assumes that you are working in a fast-moving industry, maybe delivering web software, on a code base which is constantly in motion, so operational reporting won't be an ongoing need.  However, some agile shops will need this type of reporting, so I'm thinking it would be good to add it to the quadrant, perhaps as a deliverable, if not as a type of test.
  • Pre-deployment "system testing" and post-deployment "regression testing."  As detailed in the roles and responsibilities section above, system testing is combined with regression testing in a planned way, from a business perspective, and developers retain responsibility for intrinsic code quality at the unit and component level, as well as providing tooling for tests, mocks, and stubs, along with the virtualizations needed for incomplete end-to-end scenarios.  Testers are building out pieces of the end-to-end regression tests from the beginning.  The regression set should be planned ahead at a high level, and evolved in detail through every iteration, with as much care as the code base to be tested.
How much should you automate?  What tools should you use?  And how much will that cost?  Just another couple of diagrams and we'll be there.

How Much Should You Automate?

One common misconception people have is that agile teams are required to automate everything, to reach 100% code coverage as well as 100% acceptance criteria coverage of 100% of the possible test scenarios at all architecture levels.  There are four important ways in which this is not true.


1.  Are you kidding me?  It isn't possible to do this, even if you wanted to.  Good agile testers utilize the same rules about risk-based testing, test case matrices to control test permutations, and common sense that they used when they did waterfall system testing.  And after an agile team determines what needs to be tested and what doesn't, it chooses an even smaller subset to automate.  But how?

2.  The Test Pyramid.  Mike Cohn developed the concept of the "Test Automation Pyramid," and Martin Fowler has used it recently in his bliki posts about automated testing:
From http://www.mountaingoatsoftware.com/blog/the-forgotten-layer-of-the-test-automation-pyramid
The pyramid represents the relative amount of automation teams should build at each level of the architecture.  The pyramid's main point is that teams should generate a lot more unit tests through test-driven design (TDD) than they do service-level tests or tests driven through the UI.  This is a cool and helpful concept, and powerfully illustrates the difference between the way teams tackle automation on an agile project and the way they typically do on a waterfall one.

As Fowler says, the automated tests coming out of a waterfall project typically have few unit-level tests, and probably have too many functional tests driven through the UI, making them look more like an ice-cream cone.

3.  The difference between code coverage and acceptance test coverage (Test Driven Design and Acceptance Test Driven Design): One thing the pyramid doesn't quite represent, as Fowler says, is that you may well use a tool like Jasmine to develop "unit tests" at the UI level, or a tool like ITKO-Lisa to develop "unit tests" at the service level.  Meanwhile, you may have behavior specified in a story's acceptance criteria that can only be tested by interrogating behavior that runs through all three layers.  Test architects and agile teams need to explicitly consider both dimensions:  what is the minimum set of unit and component tests we need to exercise every important line of code, but equally what tests are needed to regression test every acceptance criterion?  And then, of course, what is the smallest and least complex amount of code needed to cause both kinds of tests to pass?  This is a gnarly question and it has to be solved both at the macro level and case by case.  There is no "KISS" rule to send you hopping quickly through it.

4.  The Steel Thread.  A final useful concept for automation, which I have seen referenced by Lisa Crispin, is the "steel thread."  As a team, you want your automated tests to be delivered along with the code, not trailing after it.  This means you need to make choices.  Part of the discussion around every story is "what is the minimum automation we need to test this acceptance criterion."  You don't need to automate every edge case, every negative case, and every possible data permutation.  Exercise the code more thoroughly than you automate it, but leave enough automated so that if someone changes the code in a way that breaks it, you will know, and you can go back to check.
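
Points 3 and 4 meet nicely in practice: automate one "steel thread" test per acceptance criterion, and let TDD-style unit tests carry the edge cases the story never mentions.  Here is a hypothetical sketch of that split, with an invented free-shipping rule standing in for real requirements.

    import static org.junit.Assert.assertEquals;
    import org.junit.Test;

    public class ShippingFeeTests {

        // Hypothetical production code, inlined here only so the sketch compiles.
        static class ShippingCalculator {
            double feeFor(double orderTotal) {
                if (orderTotal < 0) throw new IllegalArgumentException("negative order total");
                return orderTotal >= 50.00 ? 0.00 : 4.95; // free shipping at $50
            }
        }

        // Steel thread: one automated test for the story's acceptance criterion,
        // "orders of $50 or more ship free."
        @Test
        public void ordersOfFiftyDollarsOrMoreShipFree() {
            assertEquals(0.00, new ShippingCalculator().feeFor(50.00), 0.001);
        }

        // Coverage dimension: a unit-level test for an edge case the story never
        // mentions, kept out of the slower UI-driven suite.
        @Test(expected = IllegalArgumentException.class)
        public void negativeTotalsAreRejected() {
            new ShippingCalculator().feeFor(-1.00);
        }
    }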

So Where Does that Leave Us?
 
Proper testing turns out to include a number of different dimensions which don't all line up to each other:
  • Architectural layer you are testing:  UI, Service, Database
  • When and how frequently you should apply the test:  Test Before Check-in, Test Before Deployment, Test when Environments are Available
  • Type of requirement:  functional or non-functional, specialized or general
  • Type of testing:  Unit, Integration, Story Point to Point, System End to End
  • Automation types:  manual, manually-triggered, CI-triggered, continuously triggered, nightly (a tagging sketch follows below)
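
One practical way to keep the "when and how frequently" and "automation types" dimensions under control is to tag each automated test with its intended trigger and let the build select by tag.  Here is a hypothetical sketch using JUnit 4 categories; the marker interfaces and the test bodies are invented, and the actual include/exclude happens in your CI server or build tool (for example, the Maven Surefire groups setting).

    import static org.junit.Assert.assertTrue;
    import org.junit.Test;
    import org.junit.experimental.categories.Category;

    // Marker interfaces used purely as tags.
    interface CheckInTests {}  // fast tests: run on every check-in
    interface NightlyTests {}  // slow tests: run by the nightly CI job

    public class InventoryServiceTest {

        @Category(CheckInTests.class)
        @Test
        public void reservingStockDecrementsAvailableCount() {
            assertTrue(true); // stand-in for a fast, isolated unit-level check
        }

        @Category(NightlyTests.class)
        @Test
        public void fullReplenishmentCycleRunsEndToEnd() {
            assertTrue(true); // stand-in for a slow end-to-end scenario
        }
    }

The check-in build then runs only the CheckInTests category, while the nightly job runs everything, and nobody has to remember which tests are safe to run while a developer waits.
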
Seriously, if you want to know about this with any depth at all, or you want to be an actual practitioner, you need to read some books by people who have actually experienced this: Lisa Crispin, Janet Gregory, Elisabeth Hendrickson, Mike Cohn, Ward Cunningham, Brian Marick, and others have a lot of great stuff out there.

But in the meantime, if you're trying to ramp up an appropriate testing strategy for your newly-converting-to-agile program in a hurry, you really need to bring in people to help you who know agile, who know testing, and who know agile testing, and you need to keep your entire team focused on quality while they ramp up the needed new skills.  This is probably a much bigger deal than you realized.

Saturday, May 11, 2013

How to Build Your Brand While Still Getting Things Done

Many of us consciously or unconsciously align ourselves in one way or another to the belief that if we work diligently, we will succeed in business. As Andrew Carnegie put it classically in 1903:
Do not make riches, but usefulness, your first aim; and let your chief pride be that your daily occupation is in the line of progress and development; that your work, in whatever capacity it may be, is useful work, honestly conducted, and as such ennobling to your life. (How to Succeed in Life, by Andrew Carnegie)
Those alignments come in different flavors:

"The Naive"
Some of us walk through life wearing a philosophical button (or, more scary, a physical one) that says:
Silly fools!  If we write really high quality code, and there is no-one in senior management there to properly interpret our code review, what have we accomplished?  (Apparently there is a comedy skit by the Royal Canadian Air Farce where they ask the question "If a tree falls in the forest, and no-one is there to hear it, where are they?" but I can't find a YouTube clip of it, so you will just have to imagine it for yourself.)

In fact, most of us notice eventually that "getting things done" does not directly translate into moving up the corporate ladder.  We are so busy pulling all-nighters to "fix the build" or "write the manual" or "solve the architectural problem" that we don't end up rubbing shoulders with the influential people who can vouch for us at promotion time.

 "The Cynical"
This comes in two forms:  people who get credit, and people who don't get credit and cry about it.  Some of us decide that since the straightest path to success is to get credit for things, we will make sure to get credit a lot.  So we will focus our energy on building our brand, and we will devote all waking hours, words, emails, and deeds to ensuring that as many things as possible are credited to us by the people who matter.  This can be a combination of exaggerations, fabrications, and appropriations of others' work, but overt distortions of fact should be limited, so that we don't get caught (very often).  Appearance is reality at bonus time!  So we shift gears.  Our new mantra is:
Meanwhile, others of us are not getting credit, and we are proud of it, except at annual review time.  How many nights have we spent sobbing into our beer with colleagues over how unfair life is, and how did that jerk get promoted, and "oh, she only manages up," and so on?  And then we move onto "I wouldn't want to be like that anyway," and we get distracted by something, and then another interesting problem comes up to solve, so we forget about it again until next year.  And it's a good thing, too, because our accomplishments are the collateral everyone is fighting to take credit for!   The story might end here, and it does with a lot of us in one cynical group or the other.

"The Phenomenon"
Some of us move past naive and also past cynical (to what you might call "Post-Cynical"), and ask the question not just sarcastically, but also in earnest:  is it possible to be successful without selling out?  If our passion is to do stuff, can we keep doing stuff, or do we have to spend all our time hanging out with important people connecting the dots that lead us to a lucrative career in consulting or an office with a window? Or at least a door?  Here we stop to ponder some people who seem to have it all, like Martin Fowler, Jim Highsmith, or Mary Poppendieck.  Those are people who do stuff and are also known for doing stuff.  They are real, and they are also famous.  Executives love them and so do disenfranchised smart people.  What does it take to be a genius who is also famous?
I don't actually know the answer to this.  If they tell you, let me know!  Meanwhile, let's power on.

"The Personal Brand Manager"
This is my strategy for the rest of us.  In real corporate trenches, we need to be aware of the choices we make with our time every day.  There are two parts to this.
1.  Don't be grandiose.  If you want to be famous, but you aren't, then figure out what you can do to make the most of what you do well.  I'm a big fan of this Stephen R. Covey concept, the "Circle of Concern" and the "Circle of Influence."  Make sure you make a distinction in your mind each day between things you can do, and things you can't do.  Work on the ones under your control.  Do not focus on the things that are not under your control.  Work to build your influence.  Otherwise you will be angry all the time.

2.  Be wise in controlling the message as well as the results around your efforts.  Understand that for those of us who are not Fowler, Highsmith, or Poppendieck (and who knows, maybe for them too!), every action includes a measurable accomplishment of some kind and a story about it.  If you are mindful, you can decide and influence what happens and who gets credit, and the obvious answer isn't always for you to do both.
We should think about what story we want to see around the things we are making happen.  Those of us who are just average people galumphing along should go ahead and make things happen, if that is our passion, but we should be putting effort into all three areas:
  • We should ensure that we do not get credit all the time.  Certainly, to some degree we do this so that people will know we know how to play the game, but we also do it because it builds good will on a solid foundation.  If we are consultants, we want our client to get credit for the stuff we are coaching them to do, and we are willing to accept their anxiety and anger around changes they are making.  We build our career by building their careers.  If we are corporate employees, we want to build up our staff and our colleagues by being quick to credit others and slow to credit ourselves.  And of course we want to credit our boss!
  • We should ensure that we do get credit some of the time.  Don't be the tree falling in the forest.  Nobody promotes a dead tree.  Make sure you know who needs to know what you are doing, and make sure they know about it.  You can even be a little Machiavellian about this:  know that when you send out a memo to your whole department praising your team, you are praising yourself.  But that's not a bad thing--it's one of those nice cases where it is a win for everyone.  If nobody knows what you're doing, you have no-one to blame but yourself.
  • We should ensure that we "manage up" sometimes, even when we don't have a specific accomplishment to brag about.  We need to be out in the world doing something besides just what our job calls for.  If we want to grow and have more influence, we need to understand that this type of networking will take time, and we need to take the time to do it.  We should not wait for the go-live or the big sale to set up meetings, to request a mentor, or to join a networking group for our fellow women, racial group, sexual orientation group, special technical interest group, or neighborhood volunteering group.  ("Former players of ASCII dungeons and dragons games, unite!")
Is that the same as "taking credit for things we didn't do"?  It doesn't have to be.  But if it is, it should be because we are relentless corporate go-getters who are willing to move ahead at any cost!  How's that for a rousing take-away?!

The point is that we have the responsibility to be our own advocates, and we need to understand what we want, and we need to work proactively to do things and to have whatever the right level of influence is, in order to keep doing things at the scale appropriate for us.  So go get it, guys!  As for me, I'm going to spend the rest of the day trying to track down that stupid online Canadian comedy sketch.