Bristle Software Agile Tips

This page is offered as a service of Bristle Software, Inc.  New tips are sent to an associated mailing list when they are posted here.  Please send comments, corrections, any tips you'd like to contribute, or requests to be added to the mailing list, to tips@bristle.com.

Table of Contents:
  1. The "Waterfall" model of software development
    1. Problem 1:  Requirements not clear
    2. Problem 2:  Requirements not implemented
    3. Problem 3:  Requirements missing
    4. Problem 4:  Requirements not accurate
    5. Problem 5:  Requirements not feasible
    6. Problem 6:  Requirements became obsolete
    7. Problem 7:  Software projects are hard to schedule
    8. Waterfall horror story
    9. The birth of Agile
  2. Agile software development
    1. The spirit of Agile
    2. Agile is like riding a bike
    3. Don't be "rigidly Agile"
      1. "Rigidly Agile" horror story
    4. First indicator of Agile success
    5. "Agile Contract"
    6. Agile success stories
    7. Agile tool support
    8. Beware WaterScrum
    9. Self-organizing team -- no PM, no status reports
    10. No status meetings
    11. No fixed release schedule -- no poles in the ocean
    12. Agile links
  3. Software Quality
    1. Structured Logging
Details of Tips:
  1. The "Waterfall" model of software development

    Original Version: 3/31/1983
    Last Updated: 5/6/2020

    In the "Waterfall" model of software development, you write out all of the requirements for a program first.  Then do the complete top level design.  Then the complete detailed design.  Then write all the code.  Then write and run all the tests and have them all pass (wouldn't that be nice!!!).  Then deliver the code on schedule.

    This NEVER worked. Things always went wrong.  See below for reasons and examples.

    --Fred

    1. Problem 1:  Requirements not clear

      Original Version: 3/31/1983
      Last Updated: 5/6/2020

      Often, the requirements doc was not clear enough.  People disagreed later about which parts were actually requirements, and which were merely suggestions, explanations, examples, etc. 

      Solution:
      The software design "methodologists" worked with the lawyers and established conventions about "shall" vs "should", "will", "would", "may", "might", etc.  Only sentences containing the word "shall" were requirements.
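
      These days, that convention is easy to enforce mechanically.  A minimal Python sketch of a "shall" extractor (the sentence splitting is crude, and the sample doc is made up; it's just to show the idea):

        import re

        def extract_requirements(text):
            """Return only the sentences containing "shall", which by
            convention are the binding requirements."""
            # Crude sentence split; real documents need a real parser.
            sentences = re.split(r'(?<=[.!?])\s+', text)
            return [s for s in sentences
                    if re.search(r'\bshall\b', s, re.IGNORECASE)]

        doc = ("The system shall log every login attempt.  "
               "It would be nice to show a welcome banner.  "
               "The system shall lock the account after 3 failures.")
        for req in extract_requirements(doc):
            print(req)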

      --Fred

    2. Problem 2:  Requirements not implemented

      Original Version: 3/31/1983
      Last Updated: 5/6/2020

      Some of the requirements slipped through the cracks.  Never designed, never implemented, never tested.

      Solution:
      We toolsmiths wrote elaborate CASE (Computer Aided Software Engineering) tools.  Yes, I too was complicit.  I was young and stupid, and just did what I was told.  Didn't always think or dare to ask if there was a better way.  (For large projects anyhow.  But for my own small tools projects, I always did what we now call Agile.)

      One such tool (TRACE) managed a "traceability matrix".  The matrix mapped all the "shalls" to the names of the code functions that implemented them.  And to the paragraph numbers of the intervening top level and detailed design documents.  And to test cases.

      The tool collected all the "shalls" from the requirements doc, the function names from the code, etc., and allowed the user to connect them up.  It checked for completeness.  It stayed current by importing updated versions of the docs, code, tests, etc.
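
      TRACE itself is long gone, but the heart of a traceability matrix fits in a few lines.  A toy Python sketch (the requirement IDs and function names are made up):

        # Map each requirement ("shall") to the design sections, code
        # functions, and test cases that cover it.
        matrix = {
            "REQ-001": {"design": ["3.1.2"], "code": ["login_user"],
                        "tests": ["test_login"]},
            "REQ-002": {"design": ["3.1.3"], "code": ["logout_user"],
                        "tests": []},
        }

        # The completeness check: flag any requirement with a gap.
        for req_id, links in matrix.items():
            gaps = [kind for kind, refs in links.items() if not refs]
            if gaps:
                print(req_id, "is missing:", ", ".join(gaps))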

      Another tool (ADT) automatically generated PDL (program design language, aka "pseudo-code") skeletons, or real code skeletons, from design diagrams.  Graphical drawing tools allowed you to create OO (Object Oriented) design diagrams in the style of Grady Booch and Ray Buhr.  And generate PDL or code skeletons from them. 

      Then you could fill in the details, and go the other way.  Generate detailed OO diagrams from the PDL/code.  Could even "close the loop", changing the generated skeletons and generating updated versions of the original OO diagrams.

      Another tool (IFTRAN) automatically parsed and checked PDL, and generated real code from it, with placeholders for you to fill in manually.  So you could write your detailed design document in a somewhat formalized, but generally English-like, PDL language, and have it generate the skeleton of the real code.
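
      IFTRAN was far more sophisticated, but the basic trick of turning English-like PDL into a code skeleton looks roughly like this toy Python sketch (the PDL dialect here is invented):

        pdl = [
            "check the login credentials",
            "IF the credentials are valid",
            "    load the user's home page",
            "ELSE",
            "    show an error message",
        ]

        # Turn each PDL line into real code, with a placeholder to fill in.
        for line in pdl:
            stripped = line.strip()
            indent = " " * (len(line) - len(line.lstrip()))
            if stripped.startswith("IF "):
                print(indent + "if True:  # TODO: " + stripped[3:])
            elif stripped == "ELSE":
                print(indent + "else:")
            else:
                print(indent + "pass  # TODO: " + stripped)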

      We were very proud of these tools!

      Languages started coming with built-in tools to automatically generate detailed design docs.  For example, Javadoc and Pydoc.  See:
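
      For example, given a module with docstrings, Pydoc formats the documentation with no extra work (the module and names here are made up):

        # payments.py
        def charge(card_number, amount):
            """Charge the given amount to the card.

            Returns a transaction ID on success.
            """
            ...

      Then "python -m pydoc payments" prints the formatted docs, and "python -m pydoc -w payments" writes them out as an HTML page.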

      --Fred

    3. Problem 3:  Requirements missing

      Original Version: 3/31/1983
      Last Updated: 5/6/2020

      Things got forgotten.  People didn't think of all of the requirements.  Or forgot to write them down.  Or accidentally said "will" instead of "shall".

      Solution:
      More tools.  Check for too short a document.  Check for too low a percent of "shalls" in the sentences.  Detailed time tracking to make sure the author or reviewer spent enough time on it.

      Also, manual reviews of the doc for completeness.

      --Fred

    4. Problem 4:  Requirements not accurate

      Original Version: 3/31/1983
      Last Updated: 5/6/2020

      It was hard to get people to pay enough attention.  When writing or reviewing the extremely long and boring requirements doc, they couldn't stay focused.  They failed to notice that critical parts were missing.  That the techniques and workflows didn't really cover all the needs of the users.  That they were clunky, error-prone and hard to use.  That lots of edge cases were omitted or glossed over.

      Solution:
      More tools.  Occasional random beeps, flashes, electric shocks to wake them from their boredom-induced coma.  (Not really, but hey, a toolsmith can dream, right?) 

      Also, copious amounts of caffeine.

      --Fred

    5. Problem 5:  Requirements not feasible

      Original Version: 3/31/1983
      Last Updated: 5/6/2020

      Some requirements were not feasible.  They didn't make sense.  Described a situation that was impossible to detect.  Or an action that was impossible to perform.  Or not fast enough on the available technology.  Or with the available resources.

      Solution:
      POCs (proofs of concept) and other prototypes.  Write little bits of code to prove it can be done.  Then write it into the requirements doc and throw the code away.  These were great fun to write!  Investigate some leading edge (or even "bleeding edge") technique or technology.  Prove it can be done, and throw the code away.  No need to maintain it, be "on call", carry a beeper, etc.  Cool beans!  We did LOTS of these.

      Also, more tools.  One tool (DAT) assessed the dynamic behavior of PDL or code.  No need to write all the detailed code before running it to measure performance.  No need to create a simulator and run it for hours and hours.  The tool created a "Petri Net" from the PDL/code.  And did a mathematical analysis to "solve" it.  Found bottlenecks.  Computed throughputs, response times, etc.  Ran in mere seconds. 

      It also worked for manually created Petri Nets that described any other system with a flow.  Not just the flow of execution through the threads of a program.  Could be the flow of parts through a warehouse or assembly line.  The flow of medicinal drugs through a body.  The flow of dollars through an economy.  Anything, really.  (I have to admit that a couple of PhDs handled the high level math.  Alex, Gianfranco, Hassan, and others from all over the world.  We toolsmiths just wrote the GUI editor for the Petri Nets, and the DB code to store them.)
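
      DAT solved the nets mathematically.  But just to show what a Petri Net is, here's a toy Python simulation of the basic firing rule (a transition fires when all of its input places hold tokens; the net modeled here is made up):

        # Tasks compete for one CPU: queued + cpu_free -> running -> done.
        places = {"queued": 3, "cpu_free": 1, "running": 0, "done": 0}
        transitions = [
            {"in": ["queued", "cpu_free"], "out": ["running"]},
            {"in": ["running"], "out": ["done", "cpu_free"]},
        ]

        def step():
            for t in transitions:
                if all(places[p] > 0 for p in t["in"]):
                    for p in t["in"]:
                        places[p] -= 1
                    for p in t["out"]:
                        places[p] += 1
                    return True
            return False  # nothing can fire: deadlock, or all done

        while step():
            print(places)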

      Another toolset (Ada/PDL) measured the complexity of the PDL or code.  Determined how many paths through the code, how many test cases required, etc.  Had a "lint"-like standards checker to catch things like missing else clause, unhandled exceptions, lack of documentation, and other configurable standards violations.  Its "pretty printer" formatted the code consistently, to make it easier to read.
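
      Those measurements are easy to approximate in modern Python.  A rough cyclomatic-complexity counter using the standard ast module (counting branch points is a simplification of what Ada/PDL computed):

        import ast, textwrap

        def complexity(source):
            """Roughly McCabe complexity: 1 + the number of branch points."""
            tree = ast.parse(textwrap.dedent(source))
            branches = (ast.If, ast.For, ast.While, ast.Try, ast.BoolOp)
            return 1 + sum(isinstance(node, branches)
                           for node in ast.walk(tree))

        code = """\
        def validate(user):
            if not user.get("name"):
                return False
            for field in user.values():
                if field is None and user.get("strict"):
                    return False
            return True
        """
        print(complexity(code))  # 1 + if + for + if + "and" = 5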

      We were VERY proud of these tools!

      Plus lots of COTS tools (Commercial Off-The-Shelf).  The VAX/VMS PCA (Performance and Coverage Analyzer) watched your program as it ran its test suite.  Gave a report of performance.  How much CPU time was consumed by each function.  And by each line of code.  And how much clock time.  Also a report of coverage.  Which lines of code were tested and how many times.
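
      Python's standard library has a rough equivalent of the performance half.  For example:

        import cProfile

        def slow_sum(n):
            total = 0
            for i in range(n):
                total += i * i
            return total

        # Prints calls, total time, and time per function,
        # much like PCA's performance report.
        cProfile.run("slow_sum(1_000_000)")

      The coverage half comes from the third-party "coverage" package ("coverage run", then "coverage report"), which shows which lines your test suite executed and which it missed.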

      --Fred

    6. Problem 6:  Requirements became obsolete

      Original Version: 3/31/1983
      Last Updated: 5/6/2020

      Even with all these tools, it took a LONG time to get the requirements right.  Everything was always very late.  Sometimes years or decades late.  And very expensive.

      By the time you ironed out all the requirements, and designed, wrote, and tested the code, the world had changed.  The original requirements no longer applied.  The problem had gone away, so no solution was needed any more.  All your time and effort (and money) were wasted.

      Or some other vendor had produced a solution, and beaten you to market.  So no one wanted yours any more, even if it was much better.

      Or the requirements had become incomplete.  Suddenly the program also needed:

      1. To run on a PC or LAN, not a mainframe
      2. A GUI (graphical user interface), not just a CLI (command line interface)
      3. An API (application programming interface) to integrate with other programs
      4. "Internationalization" to support multiple languages
      5. To run in a web browser
      6. To run on a cell phone, or tablet, or watch
      7. A REST API to integrate with other programs in the cloud
      8. To accept voice commands
      9. To speak its results verbally
      10. To run as an Amazon Alexa "skill" on a smart speaker
      11. etc., etc., etc...

      Solution:
      None.  No one could have anticipated all the ways that the world might change.  Waterfall is too slow.  Need something faster and more steerable. Tools can't help here.  Except maybe a time machine to go into the future and see what changed.  (I'm working on it.  Just need a complete set of requirements... :-)

      --Fred

    7. Problem 7:  Software projects are hard to schedule

      Original Version: 12/1/1987
      Updated: 11/21/2009
      Last Updated: 5/6/2020

      "How long will it take?"  "How much will it cost?"

      Tired of being asked these questions for software projects?  Beating yourself up for not knowing?  Other projects are predictable.  Why not this one?

      Give this answer: 

        Software projects are fundamentally different.  They're necessarily custom, complex, poorly defined and never written before.  Anyone who predicts them accurately is a liar, a thief or incompetent.  Which of those do you want me to be?  Or you can withdraw the question and we'll do it Agile.

      Explanation:

      Physical objects can NOT be cloned
      Buildings, bridges and other physical objects can't be instantly cloned.  If you have one and need two, you build the second from scratch.  Reuse the design and techniques, but not the bricks, steel, wallboard, wires, pipes, etc.  In the construction industry, you have no choice.  You're forced to build the same thing over and over.  Over time, predicting becomes easy.

      Software CAN be cloned
      Software is easily cloned.  Infinitely copiable.  You never manually recreate an exact copy.  Just copy it.  No project needed.  Therefore all projects produce CUSTOM software that has NEVER BEEN WRITTEN BEFORE.  Predicting becomes harder.

      Simple software becomes hardware
      Software is generally flexible and complex.  If it's simple enough, it becomes hardware and gets mass produced.  Therefore all software is COMPLEX.  Predicting becomes harder.

      Well-defined software is generated
      Well-defined software can be generated automatically.  If you need simple variations on a well-understood software pattern, you don't write them manually.  You generate them instantly.  At little or no cost.  Via a parameterized software generator.  Therefore all custom software is POORLY DEFINED.  Predicting becomes harder.
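
      For example, here's a trivially parameterized generator sketched in Python (the template is made up; the point is that well-understood variations cost nothing to produce):

        from string import Template

        getter = Template('''\
        def get_$name(db, ${name}_id):
            """Fetch one $name record by its ID."""
            return db.query("SELECT * FROM $table WHERE id = ?",
                            ${name}_id)
        ''')

        for name, table in [("user", "users"), ("invoice", "invoices")]:
            print(getter.substitute(name=name, table=table))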

      All software projects are custom, complex, poorly defined and never written before
      For these reasons, no one ever starts a new software project unless the software needed is custom, complex, poorly defined and never written before.  When it's not, they don't hire you to write it.  They just make a copy, use hardware or generate it.

      Not like the building construction industry
      Don't compare custom software development to the repetitive work done in other industries like the construction of buildings.  Don't accept the claim that accurate long-term scheduling should be possible.  The comparison is invalid.

      Liar, thief or incompetent
      Don't trust anyone who predicts exactly how long it will take to write a large piece of custom software.  They are a liar, a thief or incompetent. 
      Liar: They may be lying about the low time and cost.  Planning to find a way to charge more by re-negotiating later.  Charging extra for lots of "change orders", etc.
      Thief: If not lying, and can really tell you exactly how long it will take, they may have found a way to generate it.  Planning to generate it cheap and quick, but still charge as though they wrote it custom.
      Incompetent: If not planning to generate it, but have done it manually so often that they know exactly how long it takes, they're incompetent for never having automated it.  Planning to charge you for a lot of unnecessary manual work.

      What's the solution?  Don't try to estimate too far in advance.  Use Agile Software Development, not the old Waterfall model.  For more info, see the Agile row of my links page:

      --Fred

    8. Waterfall horror story

      Original Version: 3/31/1983
      Last Updated: 5/9/2020

      Here's the story of a massive Waterfall project that failed after 10 years.

      I worked at a large government contractor for a few years right out of college.  On a project to replace the entire US civilian ATC (Air Traffic Control) system.  Hundreds of us worked to gather all the requirements for the entire system, write them down, get experts to review them, etc.  We would then have proceeded to the top level design, detailed design, code and tests. 

      But, the requirements gathering was dragging on and on.  I got bored.  Within a couple months, I managed to get re-assigned to the PMO ("Program Management Office").  My job there was to quickly write little experimental bits of code to see what was possible.  So the PMO guys could decide what to commit to in a project proposal.  MUCH faster cycle time, and much more interesting work.

      Still, I kept getting pulled back into the main ATC project for "emergency" work.  For example, I had to go through all the requirements docs.  Find every numbered section like "1.3." that contained subsections like "1.3.1." and "1.3.2.", but no actual text of its own.  Add a standard paragraph that said "This section is composed of the following subsections". 

      I also had to find and fix all places where someone had accidentally left off the trailing period in a section number.  For example, change "1.3" to "1.3."  Mind-numbingly boring!  Of course, I automated most of it, so I could go back to the fun work at the PMO.  But it was truly stupid work, of no value whatsoever!

      After 2 years, I transferred to the company's Software Tools group.  Was much happier for the next 3 years writing compilers, standards checkers, complexity metrics, pretty-printers, etc.  I loved it because they were quick and easy projects.  And because the users were my friends and colleagues down the hall.  So, lots of direct feedback.

      Then I moved to Virginia, and worked for 4 years for a software tools company.  Then I moved to the Philly area and started work in the software platform group of a company that wrote "hospital information systems" for use by doctors, nurses, etc.

      Finally, it happened!  The day came.  10 years after I'd started at that 1st company.  8 years after I'd left the ATC project.  I'd kept in touch with my old friends there.  I got an email saying the project was FINALLY finished!

      But no.  They had NOT completed the waterfall: getting all of the requirements exactly right, then writing a top level design doc to cover them all, then a complete detailed design doc, then writing and testing the code.  Instead, they'd spent 10 years struggling to perfect the requirements doc.  So it covered all possible situations and could be formally approved by all the right government agencies, etc.

      But before they could get it right, the world changed.  New technologies came along (for software, for airplanes, for radar and other tracking/sensing technologies, for communications, etc.).  So they CANCELLED the entire project, without ever having delivered a single line of code.  Millions of dollars down the tubes!

      Boy was I glad I had bailed out early!  I'd spent the past 8 years doing Agile software development, delivering dozens of projects, thrilling my customers and end users daily, etc.

      --Fred

    9. The birth of Agile

      Original Version: 3/31/1983
      Last Updated: 5/6/2020

      After a while, people started to realize that Waterfall just wasn't very realistic.  Things always went wrong.  You always had to go back and start over.  Why bite off so much at once?  Why not try for smaller incremental successes instead of one big failure?  Why not do what the tools guys were doing on their many successful small projects?

      They started doing things like:

      1. Prototyping
      2. Rapid Prototyping
      3. Incremental Spiral Design
      4. RAD (Rapid Application Development)
      5. Agile
      6. etc.

      --Fred

  2. Agile software development

    Original Version: 3/31/1983
    Last Updated: 5/7/2020

    I'm a huge fan of what's now called "Agile" software development.  Or "Lean", "Nimble", etc.  An approach that some of us have been using for decades.  It's finally gaining a lot more traction for large projects in the past 10 years or so.

    With Agile, we write a little piece of the requirements.  Then do that piece of the design.  And write that piece of the code, tests and docs.  Typically all takes a week or two.

    Then we show that tiny piece to the users/clients to see what they think.  Then proceed to do the next little piece.

    I've been doing it this way for 36 years.  My entire career.  Because I realized early on that the Waterfall model was a disaster.  See:

    --Fred

    1. The spirit of Agile

      Original Version: 3/31/1983
      Last Updated: 5/7/2020

      I've been practicing the spirit of what's now called "Agile" very successfully since 1982.  It's had various names:  "Rapid Prototyping", "Incremental Spiral Design", "Rapid Application Development (RAD)", etc.  But it always boils down to:

      1. Talk to the users for a few hours to get an idea what they want.

      2. Sketch out the basic architectural layers in an hour or so.  I used to draw them as horizontal layers on a half sheet of 8.5x11 paper and pin it to my wall.  These days, use a simple diagram in any drawing tool.  For example:
        There might be a UI layer, a business layer, and a data storage layer.  Respect these at all times.  Never put code in the wrong layer.  Over time you may replace a layer with an equivalent layer that uses a better technology.  But always respect the currently defined layers. 

      3. Write a thin vertical slice of functionality that uses all the layers correctly.  And does some minor thing that the user wants.  Like logging in with a valid username and password and seeing a list of the top level business objects.  Pharmaceutical drugs, or patients, or recipes, or bank accounts, or houses, or whatever.

        The UI layer prompts the user for login credentials and passes them to the business layer.  The business layer checks them against rules like "required field", "must not contain injection attacks", etc.  It then accesses the data layer to confirm they match an existing user.  (A code sketch of such a slice appears after this list.)

      4. Demo to the users within a week.  Get feedback.

      5. Encourage the users to accumulate a prioritized list of additional features that they want/need.  And to keep the list VERY dynamic.  Add/delete/change items whenever it occurs to them.  Hourly, daily, whatever.  Work with them to focus on true business needs so the priorities are right.

        For example, in a new app being developed by a startup with limited capital, the overall priorities are probably:
        1. Proof of Concept -- Is it even possible to add features that we'll need later?
        2. Features that allow the owner to demo to possible investors/users
        3. Features that allow the owner to give them direct access and not have the app fall on its face during their unguided tour
        4. Features to make it faster, more reliable, more supportable, more secure, access control for multi-user, etc.
        5. Features not worth demoing but needed for a real product.  Change my own password, cosmetic details, etc.

      6. Widen the thin vertical slice.  Pick an item from the top of the list.  Add more functionality that respects the layers, and does some incremental thing.  Like viewing the details of a top level business object.  Editing those details.  Computing some value.  Etc.

        Write one or more automated test cases to show that the new feature works properly.  Run the entire test suite to make sure nothing else broke.

      7. Demo to the users within an hour, day or week or so.  Get feedback.  Does the new feature make sense?  Work properly?  Need any change?  Send them back to the list to clarify the item you worked on.  And maybe add/delete/update other items to change your direction.

      8. Have a retrospective meeting with the team periodically to refine the process.  Any logistical improvements we can make to streamline things?  Are we doing anything unnecessary?  Something additional we should be doing?

      9. Lather-rinse-repeat:  Spend the next several weeks, months, or years, repeating steps 5-8.  At a rate that suits the cash flow of the client/employer.  Until the users can't come up with any more ideas that the client/employer wants to pay for.

      10. Declare victory:  Stop at any time.  With a fully working product, not a throwaway prototype.  It does some useful things, and does them correctly.
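
      Here's a minimal Python sketch of that first thin vertical slice (the names and the in-memory "database" are stand-ins, not a real implementation):

        # Data layer: the only code that touches storage.
        USERS = {"alice": "s3cret"}  # stand-in for a real database

        def find_user(username, password):
            return USERS.get(username) == password

        # Business layer: rules and validation; knows nothing about the UI.
        def login(username, password):
            if not username or not password:
                raise ValueError("Username and password are required.")
            if not find_user(username, password):
                raise ValueError("Invalid username or password.")
            return ["Drug A", "Drug B"]  # the top-level business objects

        # UI layer: prompts and display; knows nothing about storage.
        def main():
            try:
                items = login(input("Username: "), input("Password: "))
                print("Your items:", ", ".join(items))
            except ValueError as err:
                print("Error:", err)

        # One automated test per feature, as in step 6.
        def test_login_rejects_bad_password():
            try:
                login("alice", "wrong")
                assert False, "expected a ValueError"
            except ValueError:
                pass

        if __name__ == "__main__":
            main()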

      --Fred

    2. Agile is like riding a bike

      Original Version: 11/6/2013
      Last Updated: 5/7/2020

      Agile software development is like riding a bike.

      Imagine someone says "ride your bike around the block and meet me back here".  That's easy.  Off you go...  See you in 5 minutes!

      But what if they ask you to plan it all in advance?  The exact route you'll take.  Where each wheel will be at each second.  To the nearest inch.  Exactly what angle you'll lean as you round each bend.  Etc., etc., etc.

      It becomes an impossible task.  They've turned it into Waterfall!

      --Fred

    3. Don't be "rigidly Agile"

      Original Version: 5/31/2006
      Last Updated: 5/7/2020

      One problem with Agile software development these days -- it's a victim of its own success.

      Once it became clear that the "Waterfall" model pretty much always failed, and the Agile model pretty much always succeeded, everyone started jumping on the Agile bandwagon.

      Unfortunately this included the "methodologists".  People who'd devoted their entire careers to writing out all of the gory details of EXACTLY how to do Waterfall.  Writing books about it.  Teaching seminars.  Coaching teams.  Etc.  When Waterfall fizzled, they did the same for Agile.

      Various Agile "methodologies" appeared:

      • XP -- Extreme Programming
      • Scrum
      • TDD -- Test-Driven Development
      • BDD -- Behavior-Driven Development
      • FDD -- Feature-Driven Development
      • etc.

      People tried them.  Decided what worked well and what was a waste of time.  Refined them.  Developed more Agile methodologies:

      • Lean
      • Kanban
      • Lean Startup
      • Scrumban
      • etc.

      All struggling to avoid too much process, too much ceremony, too many unnecessary steps.  Trying to focus on getting the actual work done.

      As described here, the basic idea should always have been:

      • Have the users maintain a VERY dynamic prioritized list of their current needs
      • Pick the top item off the list, write code for it and test cases to show it works as intended
      • Demo to the users VERY often (hourly, daily, at least weekly)
      • Learn from the users what makes sense, and change direction as needed VERY often (hourly, daily, at least weekly)
      • Have a retrospective meeting periodically to refine the process
      • Lather-rinse-repeat

      If the user wants an 18-wheel tractor trailer:

      • Give him a bicycle ASAP
      • Then a moped
      • Then a VW Rabbit
      • Then an SUV
      • Then a big pickup truck with a covered bed
      • Then add a U-Haul trailer
      • ...
      • ...
      • ...
      • Finally a full-sized 18-wheel tractor trailer.

      Demo to him at every step along the way.  At some point, he may discover that he doesn't need any more than what you've already delivered.  That a tractor trailer was overkill.  Great!  The project ends early and he's happy.

      Meanwhile, as soon as he sees you delivering a canoe, and promising a rowboat, a sailboat, etc., he should raise a red flag.  You're no longer headed for a tractor trailer.  You're veering off towards a cargo ship.  Unless you can convince him that it's a necessary and useful step towards a tractor trailer, he should steer you back on course.

      But the methodologists got involved and made a real mess.  They wrote books, gave seminars, whispered in the ears of senior management, coached teams, etc.  All about EXACTLY how to do Agile most "correctly".  They added lots of process, ceremony, meetings, checkpoints, metrics, etc., and missed the whole point of Agile.  Instead of flexible, lightweight and effective, they made it rigid, burdensome and useless.

      Soon we had even more Agile methodologies.  Many intentionally more heavyweight.  To be used by larger projects.  Why not do the obvious Agile thing?  Split each large project into multiple smaller projects.  Instead, we have:

      • SAFe -- Scaled Agile Framework (SAFe)
      • DAD -- Disciplined Agile Delivery
      • LeSS -- Large-Scale Scrum
        Not to be confused with:
        • less -- Unix utility to view a file
        • Less -- CSS preprocessor
        • LESS -- Lunar EScape Systems
      • Nexus -- Scaled professional Scrum
      • Scrum at Scale
      • Enterprise Scrum
      • etc.

      Yuck!!!

      These days, before I join an Agile project, I ask exactly WHICH Agile methodology they follow.  Some answer proudly with a single name like Scrum, TDD, BDD, FDD, Kanban, SAFe, etc.  I know immediately that I do NOT want to join the team.

      Others give a more flexible answer, like:

        "We started with Scrum, but found that the stand-ups were overkill, so lately we've been doing more of a Kanban-like approach.  We're looking into making it even more Lean.  We like some of XP's pair-programming, but not all day every day, and we generally prefer to pair a junior person with a mentor, rather than 2 peers.  We ALWAYS have a least a brief retrospective, as a way to find out what's working for us and what isn't".

      Now THAT I can work with! 

      --Fred

      1. "Rigidly Agile" horror story

        Original Version: 5/31/2006
        Last Updated: 5/7/2020

        Here's an example of how bad things can get when you adopt a methodology whole hog and become what I call "rigidly Agile". 

        I worked at a small bank in Delaware once.  They used the FDD methodology.  Their single biggest rule was that they would deliver new software into production EVERY OTHER FRIDAY AT 5PM, come hell or high water. 

        Nothing mattered but the schedule.

        Halfway through a typical cycle, they'd be scrambling to get all the features done and tested on time.  But time would be running short.  So, most developers quietly skipped writing tests.  Just hoped their code worked properly.  And didn't get around to offering their code up for review.  Or reviewing any code offered by others.

        Then on Weds or Thurs of the 2nd week, the entire team would meet.  At first, each developer would claim to be on schedule.  Then one would cave.  Would reluctantly concede that it wasn't all going to get done on time.  Once that gate was opened, others would concede that they might also be late.  Need another day or two to get it right. 

        But slipping the schedule was never an option.  Had to make the all important date! 

        So they'd decide to yank out some features.  No question of "It's mostly done and only needs an extra day or two, so maybe we should leave it in and release on Monday?".  No, out it came!  People scrambled to remove/revert those parts of the code.  The release occurred on time Friday at 5pm.  Crisis averted!

        Predictably, bugs occurred over the weekend. 

        The features hadn't been removed/reverted exactly right.  There were no tests to catch the bugs before production.  They worked overtime Saturday/Sunday to fix it.  Often into Monday/Tuesday.  Finally got it working.  Pushed to production again.  Hoped none of the users around the world had been too badly affected.

        Then, without a retrospective, or even a pause to breathe, they dove into the next release.  Exhausted and already a couple days late. 

        Again, the tests/reviews of new features got quietly dropped.  And no time to go back for previous tests/reviews.  No one even admitted they were missing.  Onward!

        Again, stuff wasn't going to get done, so they quickly pulled out some features.  Delivered on Friday as scheduled.  But more bugs, more scrambling, etc.

        What a disaster!  I only worked there a few months.  Got out as soon as I could.

        --Fred

    4. First indicator of Agile success

      Original Version: 10/1/2014
      Last Updated: 5/7/2020

      Here's the first solid indicator of success I look for in any Agile project:

        The client/manager stops asking when it will be done.

      Yes!  They've shifted from a Waterfall mindset to an Agile mindset.  Accepted the truth of:

      This happens for 3 reasons:

      1. They can see exactly how fast we're going.  Whether sufficient progress is being made to manage their cash flow.

      2. They know we're doing the most important things first and can ship at any time.  Not just a prototype.  A viable product.  With perhaps more or less functionality than originally hoped.

      3. They appreciate being able to add new ideas on the fly as they occur to them.  No longer think of the development effort as a cost to be minimized.  It's a lucrative investment to be maximized.

        When's the last time anyone ever said to their stock broker:

          "How much longer do I have to wait before I can pull my money out and stop making such a huge profit?"

      Always be transparent.  Make sure they can see the return on their investment.  Every day.

      --Fred

    5. "Agile Contract"

      Original Version: 1/22/2007
      Last Updated: 5/7/2020

      Since 2004, I've always offered my clients what I call an "Agile contract".  Basically it says:

      1. I'll work for a month, doing the things at the top of your list.

      2. I'll deliver an increment of working software at least every week or 2.

      3. Feel free at any time to change priorities.  To change your mind about how something should be done.  Or should have been done.

      4. I'll send you an invoice at the end of the month and want it paid 2 weeks later.

      5. When that invoice comes due at the end of the 6 weeks, you'll have seen 3-6 increments of working software.  Consider the following:
        1. Am I doing quality work?
        2. Moving fast enough?
        3. Moving in the right direction?
        4. Steerable enough when I take a step in the wrong direction?
        5. Do you know enough about what you want?  Able to point me in the right direction?

      6. If you answer "No" to any of these, even #5, don't pay the invoice, and we're done.

      7. Repeat the same process each month, until the most important thing on your list isn't worth paying for.

      This typically leads to a few years of work for a client who is very satisfied every single day.  So far, no client has EVER chosen to not pay an invoice.  They keep asking me to continue until they can't think of any more features worth paying for.  Or until they decide that they need to spend their money on marketing, advertising, corporate partnerships or some other aspect of their business, rather than further software development.

      Until then, they're thrilled to pay each invoice.  It takes a lot of the pressure off and makes the paperwork much less intense.  No long term contract, no all-inclusive requirements document, no fixed price, no change orders.  Just keep moving smoothly and quickly forward according to the ever-changing definition of "forward".

      --Fred

    6. Agile success stories

      Original Version: 3/31/1983
      Last Updated: 5/7/2020

      As proof that the "Agile contract" ALWAYS works, here are the LinkedIn recommendations that a few clients and colleagues have given me:

      Many of these are copied from my LinkedIn page, so you can see them there also.

      But since Microsoft bought it, there's no longer a way to link directly to a specific recommendation at LinkedIn.  So now I have to provide links to my own web site.

      --Fred

    7. Agile tool support

      Original Version: 3/25/2010
      Last Updated: 5/7/2020

      A small team can run like a well-oiled machine if it uses the right tools.  The client or manager, and the designated lead users or other stakeholders, can just sit back and receive email notifications from the various tools, like:

      1. Jira (or any bug/feature tracking system) sends email about a configurable set of events like:

        1. New bug reported by someone.  Could be a dev or tester, the PM, the lead user, some other user, the client who's paying for the project.  Anyone that Jira's been configured to allow to submit bugs.

        2. New feature requested by someone.  Same types of "someone" as above.

        3. Ticket (bug or feature) prioritized by someone.  Probably by a senior dev or tester, or the PM, or a lead user, or the client, but again it depends on who you tell Jira to allow to do it.

        4. Ticket scheduled by someone for the next release or the one after that.  Note:  It's a bad idea to start scheduling too far out in much detail.  Makes for lots of time and effort updating such schedules as priorities change, real world constraints have their effects, etc.

        5. Ticket assigned to a specific dev by someone

        6. Dev started work on ticket, adding a comment saying how he planned to do it and roughly how big an effort it seemed

        7. Dev marked a ticket as "Resolved"

        8. Tester marked a ticket as "Closed" or "Re-opened" because it passed or didn't pass tests.  And included comments about why, if re-opened.

        9. etc.

      2. Bitbucket (or GitLab, GitHub, or any version control repo site) sends email that a dev pushed a code change with comments about what ticket was fixed or partially fixed by the code change.

      3. Jenkins sends email saying it detected a code push to Bitbucket, so it rebuilt the updated software and ran all tests, but a test failed.  (The classic "Billy broke the build!" email that puts Billy in the doghouse until he fixes it.)

        Or that the tests all passed and the code was pushed to the test, QA or demo system, along with release notes, docs, etc.

        And later, that someone clicked to authorize the immediate or scheduled push into production.

      All of these tools can be configured.  Send as many/few emails to each person as they like.  Which types of notifications.  Which tickets (only those "assigned to me", only those for components A and D, only those for the upcoming release, etc.).  And so on.

      The tools all work together (see the sketch after this list):

      • Git push triggers a Jenkins build
      • Jenkins pulls Jira ticket numbers from comments of Git push
      • Jenkins creates release notes from Jira ticket numbers
      • etc.
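
      That release-notes step is less magic than it sounds.  The core of it, sketched in Python (real setups use a Jenkins plugin for this; the commit messages and ticket pattern here are examples):

        import re

        commit_messages = [
            "PROJ-101 Validate phone numbers on the donate page",
            "Fix typo in README",
            "PROJ-107 PROJ-108 Cache the top-level business objects",
        ]

        # Pull Jira-style ticket IDs out of the commit comments...
        tickets = sorted({t for msg in commit_messages
                            for t in re.findall(r"[A-Z]+-\d+", msg)})

        # ...and turn them into release notes.
        print("Tickets in this build:")
        for ticket in tickets:
            print("  " + ticket)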

      The tools can generate high level management reports.  Can be viewed on demand by anyone authorized:  senior management, owners, investors, clients, users, etc.  Or emailed directly to them.

      • Graphs of number of tickets created/resolved over time
      • Number resolved by each dev
      • Number re-opened because they were not resolved quite right the first time around (by the team or by each dev)
      • Number reported by each tester
      • etc.

      All of these tools are either FREE or very cheap (maybe as much as $10/month), so cost is no issue.

      --Fred

    8. Beware WaterScrum

      Original Version: 11/8/2018
      Last Updated: 5/7/2020

      In 2018, a woman invited me out to lunch.  Had seen my postings at a local meetup, and was impressed.  Thought I'd be a perfect fit for tech lead of an Agile project at her startup.

      Over lunch, it quickly became clear that she wanted to do "WaterScrum".  Waterfall with some Agile stuff tossed in.  Waterfall with Scrums.  A rush through development without any of the discipline of true Waterfall or true Agile.  Daily meetings to push the devs to go faster.  But no dynamic wish list, iteration, user feedback, version control, bug tracking, CI/CD, documentation, etc. 

      I declined.

      So she asked me to take a lightly-paid, low-hours, advisory role.  Just keep her from going off the rails too badly.  She was a self-funded startup, low on cash, high on enthusiasm, hoping to strike it rich.  What the heck, why not?  If she's open to advice, maybe I can do some good here!  I took pity on her.  I helped her recruit a couple of part-time remote devs.  One in Texas.  One in California.  They could afford to work cheap outside of their normal 9-5 job.  Might get rich with her from stock options.

      Unfortunately, she'd already taken a couple of major Waterfall steps.  She'd hired a project manager, though she didn't expect to need more than 2-3 devs and expected the project to be done in 4-6 months.  Why?  And hired a design firm to define the exact requirements.  They'd used feedback from focus groups to design the "perfect" UI and workflow.  Had created mockups of all the screens.  Down to the nearest pixel.  Had spent almost all of her money.  Doh!

      She had no idea how badly she'd been used.  Was ready and raring to go.  Money mostly gone, but all the hard parts are done.  Just bring in a couple of cheap programmers to crank out the code.  Push to the app store.  Get rich!  Easy-peasy!  But money WAS getting a little tight, so she wanted to use Agile.  Didn't know much about it, but heard it was cheap.

      I tried to save her.  Told her things like:

      No luck!  She wasn't listening.  Didn't want my advice after all.  Really just wanted my resume to show to potential investors.  I pulled out.  All the devs she'd recruited got frustrated and left too.  Bummer!

      --Fred

    9. Self-organizing team -- no PM, no status reports

      Original Version: 3/25/2010
      Last Updated: 5/7/2020

      As I said to my WaterScrum client:

      OK.  You've hired your cousin Jennifer as project manager.  What exactly does that entail?  With only 2 part-time developers and an occasional advisor or two, it doesn't seem like there'd be much for a PM to do.  I was surprised when you first mentioned her at lunch, because I'd assumed you'd just do that yourself.

      A small team can run like a well-oiled machine if it uses the right tools.  See:

      That doesn't leave much need for status meetings, scrum meetings, a project manager, or any other such overhead.  Whoever wants to know the status of the project or an upcoming release, observe its trajectory, predict its likely end date, etc., can just sit back and read emails.

      Or can fire up a Web UI to explicitly view reports with charts and graphs, do queries, drill down into details, etc.

      If you like, you can ask each team member to write an explicit status report each week or each month.  Perhaps just 3 short bullet lists:

      1. Tickets I resolved
      2. Tickets I plan to work on next
      3. Any roadblocks I need help with

      But even that's overkill.  The tools and emails should make status (1) and priorities (2) obvious at all times.  And do you really want a dev to sit around doing nothing about a roadblock (3) until the next scheduled meeting or status report?  More likely, he'd just immediately fire off an email or a Slack message.  Or reassign a Jira ticket from himself to the person he needed help from.

      Life is much simpler without a PM.  Greg and Jacob should look at the current "wish list" of Jira tickets.  (Or "backlog" or "draw-down list" or "user stories" or whatever you want to call it.)  They should each:

      1. Pick 1-3 tickets to work on.  Or have them assigned by you or a PM or someone.
      2. Give their best guess of whether each ticket should take an hour, a day or a week to implement.  If it's a week or more, split the ticket into smaller tickets and have them tackle just one part.
      3. Work on one or more of the tickets until they have a tiny increment of additional functionality working.  And a new test case.
      4. Push the code and test case to Git.  That causes Jenkins to kick off a build.  The build also runs the tests and deploys to a test, QA or demo environment.  And notifies you and any testers that a new version is ready to test or demo, what changed, etc.
      5. Move on to the next ticket.
      6. Lather-Rinse-Repeat!

      If we do a good job of specifying and assigning the tickets, Greg and Jacob can work pretty much independently.  Each cranking out a new ticket every couple of days.  Sometimes several tickets in the same day.  I've been known to sometimes do 10 or more in one long productive coding session. 

      Just watch for any ticket that drags on for a day or more past the estimate.  Probably means the dev is having a problem.  May need help.  Or may need to break the ticket into multiple smaller tickets.  There was more to it than originally thought.

      What about dependencies between parts that multiple people are working on?  How to coordinate their efforts?  Don't we need a 1917 Gantt chart?  A 1950s PERT chart (Program Evaluation Review Technique)?  A 1962 WBS (Work Breakdown Structure)?  A 1984 Microsoft Project file?  Some project management tool to show merging timelines, etc?  Lots of manual effort to do a coordinated release at the right time?

      No.  Just use Git "branches" to create multiple versions of the latest code.  Each dev creates a branch, does his work, commits his changes to his branch.  Uses the automatic Git "merge" tools to merge them into the main branch when ready. 

      When there are dependencies, one dev creates an additional branch for the integration testing.  Each dev merges his branch into that branch, and they test the combination.  When ready, they merge that branch into the main branch.  And then delete that branch.

      Branches are extraordinarily lightweight in Git.  Can be created, merged and deleted in seconds.  The same idea as in all the project management tools.  But they store the actual code.  Not just a paper trail that someone has to manually update.  Can even generate diagrams showing the branches, how they were forked, merged, etc.  High level management reports showing progress, etc.

      And of course, since Git's a version control system, nothing's irreversible.  If someone makes a mistake, you can always revert to a previous point in time.  Delete a branch that was accidentally merged.  Roll it back.  Recreate it from some or all of the original branches.  In the states they were in when originally merged. 

      MUCH more flexible and powerful than any project management tool.  Written by Linus Torvalds himself to manage changes by the 5,000 or so people, all around the world, working on the source code for Linux.

      What's left for a PM to do?  Wouldn't she be bored, just sitting back and watching the "self-organizing team" run itself?

      --Fred

    10. No status meetings

      Original Version: 1/22/2007
      Last Updated: 5/7/2020

      With the right tools in place, there's no need for daily or weekly status meetings.  Also, no PM and no status reports.

      From 2007 to 2010, I worked for a woman (Sharon Flank) under the Agile contract that I'd just invented.  She was in Washington, DC.  I was 130 miles away, near Philly.  We only met in person 4-5 times total over those 3 years.  And never used the phone.  It was all handled by email, frequent software releases, and automatic notifications from tools.

      We kept a wish list that reflected her changing priorities over time.  She first needed features to allow her to do a careful demo to a prospective investor or customer.  Then features to allow her to just send them a link and have them try it themselves without any major problems.  Then features that allowed them to really use the product, doing boring but necessary things like changing their own passwords.  And occasionally, features like caching to make the increasingly complex functionality run faster. 

      She and I both added to the list, and agreed on priorities.  I picked items from the top of the list, and cranked them out at a rate that suited her current cash flow -- anywhere from 40 hours/week to 10 hours/week.  For 3 years.  With no status meetings.  The project finally ended when she couldn't think of any more features worth paying for.  Needed to use the rest of her budget on marketing, advertising, corporate partnerships, etc.  Not further software development.

      You can see the recommendation she wrote me at LinkedIn, or here.

      --Fred

    11. No fixed release schedule -- no poles in the ocean

      Original Version: 3/25/2010
      Last Updated: 5/20/2021

      Don't schedule a release every 2 weeks.  Have them AT LEAST EVERY 2 WEEKS OR SO.  Sometimes multiple releases per day, sometimes one that takes more than 2 weeks.  Don't stick rigidly to an arbitrary schedule like these guys did:

      They spent way too much time in meetings at fixed times every day or every week.  And stopping in the middle of the creative task of writing code to go to those meetings.  And planning what they were going to say in those meetings, in case they were called on to speak.  And sitting there bored for hours during parts of the meetings that didn't matter to them.  And following up on issues raised in those meetings. 

      All before finally getting back to the creative, detailed and important work that had been interrupted.  And then later, fixing all the bugs in the code that occurred because they'd been interrupted so many times while writing it.

      I got all of my real work done after 5pm, when everyone else had gone home.  See:

      They should have cancelled all the meetings and been truly Agile.  See:

      And they spent way too much time rushing to make a scheduled release, agonizing over being late, yanking out code that wasn't ready on time, putting the code back in later, and dealing with all the chaos that caused.  Those rigidly scheduled meetings and releases were what I call "poles in the ocean".

      Developing software is like swimming in the ocean.  The waves and tides can't hurt you if you just go with them.  Swim in the direction you need to go.  Let the waves wash over you.  Let the tides slow you down or speed you up.  Accept the fact that you may proceed faster or slower in any given hour or day.  As long as you're moving fast enough overall.

      Don't arbitrarily schedule fixed daily or weekly meetings.  Or fixed periodic release dates.  That's like inserting rigid poles into the ocean.  It makes the inevitable waves and tides very dangerous.  The swimmers get swept into them, breaking bones, crushing skulls, etc.  And the poles inevitably get pushed around (to new dates).  Or damaged (project cancelled).  Remove the poles and let the swimmers swim freely, safely, at full speed.

      That's how I always do Agile.  Why my clients are always so thrilled with my results.  Why they tend to stop asking "when is it going to be done":

      Looks like Google is getting the idea:

      --Fred

    12. Agile links

      Original Version: 7/6/1995
      Last Updated: 5/7/2020

      The Agile Software row of my links page contains links to lots of Agile resources:

      --Fred

  3. Software Quality

    Original Version: 3/31/1983
    Last Updated: 2/23/2021

    You can't do Agile without paying a lot of attention to software quality.  If you just hack and slash, it all falls apart fast.

    Each time you write an additional code snippet, you have to also:

    1. Choose variable names and code structures that make it as obvious as possible what the code does
    2. Add comments for anything that you couldn't make obvious
    3. Include code for:
      1. Data validation -- user enters invalid data
      2. Fault tolerance -- communication errors, etc.
        Examples:
        1. Donation was made, but the Internet connection was broken before the user got a confirmation code
        2. Money was accepted from a donor, but a DB error occurred when trying to credit it to the medical patient it was being donated to
        Have to be sure to either roll back the entire action, or record its status and resume it later.  (See the rollback sketch after this list.)
      3. Other error handling
      4. Logging
      5. Security
      6. Privacy
      7. Scalability
      8. Performance
      9. etc.
    4. Write a test case
    5. Update the docs
    6. Check the code, and test case, and docs into Git or other version control
    7. Update the JIRA or other bug/feature tracking ticket that prompted the code change
    8. Update the Jenkins or other automated build, if necessary, to build the new code, run the new test, etc.
    9. etc.
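
    For the rollback case mentioned in item 3.2, the shape of the code is roughly this (a sketch; the payment functions are stand-ins, not a real gateway API):

      def charge_card(donor, amount):
          print("Charged", donor, amount)
          return "TXN-123"  # stand-in for the payment gateway call

      def credit_patient(patient, amount):
          raise IOError("DB error")  # simulate the failure case

      def void_charge(transaction_id):
          print("Voided", transaction_id)

      def donate(donor, patient, amount):
          """Charge the donor and credit the patient, all or nothing."""
          transaction_id = charge_card(donor, amount)
          try:
              credit_patient(patient, amount)
          except Exception:
              # Don't keep money we couldn't credit: undo the charge,
              # then let the normal error handling take over.
              void_charge(transaction_id)
              raise

      try:
          donate("John Smith", "patient 9536", 30.00)
      except IOError as err:
          print("Donation failed:", err)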

    --Fred

    1. Structured Logging

      Original Version: 5/5/1999
      Last Updated: 2/22/2021

      Logging is critical to a successful app, Agile or otherwise.  Write enough detail to a log file to easily debug problems that occur.  Because problems WILL occur.  And to monitor the app in real time.

      Especially for web apps.  And even more so when you have a huge number of relatively unknown users, who are not likely to report problems to you, and more likely to just bitch about your product or switch to your competitor.

      Here's how I like to do logging in all my apps:

      Logging of User Errors

      Here's a typical set of log lines when the user makes validation errors, like omitting required fields:

      BEGIN VIEW app1.donate
      . Validation errors:
      . . payment_type::This field is required.
      . . phone::This field is required.
      . All posted fields:
      . . address_1:123 Main St:
      . . address_2::
      . . amount:30.00:
      . . bank_account_name::
      . . bank_account_number:xxxxxxxxxxxx:
      . . bank_account_type::
      . . bank_name::
      . . bank_routing_number:xxxxxxxxxxxx:
      . . card_number:xxxxxxxxxxxx:
      . . city:Madison:
      . . country:United States:
      . . duplicate_ok::
      . . email:johnsmith@gmail.com:
      . . exp_month::
      . . exp_year::
      . . first_name:John:
      . . last_name:Smith:
      . . payment_type::
      . . phone::
      . . security_code::
      . . state:CT:
      . . zip:06443:
      . Showing donate page
      END__ VIEW app1.donate -- Elapsed secs = 0.0892720222473
      

      Note that it shows:

      1. All data sent by the user
      2. What validation errors occurred
      3. What action we took next
      4. The elapsed time to process the HTTP request

      Then the page gets re-shown to the user with the validation errors in red next to each field.
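
      A context manager is one easy way to produce this nested BEGIN/END format.  A minimal Python sketch of the idea (a production logger would write to a log file and record more detail, like the elapsed-time rules shown in these examples):

        import time
        from contextlib import contextmanager

        _depth = 0

        def log(message):
            print(". " * _depth + message)

        @contextmanager
        def step(name):
            """Log BEGIN/END around a block, indenting nested steps."""
            global _depth
            log("BEGIN " + name)
            _depth += 1
            start = time.time()
            try:
                yield
            finally:
                _depth -= 1
                log("END__ %s -- Elapsed secs = %s"
                    % (name, time.time() - start))

        with step("VIEW app1.donate"):
            with step("make_donation: Doing database updates"):
                log("Updating User")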

      Logging of Successful Actions

      When the user fills out the form correctly and re-submits, the log shows:

      BEGIN VIEW app1.donate
      . All cleaned fields:
      . . address_1:123 Main St:
      . . address_2::
      . . amount:30.00:
      . . bank_account_name::
      . . bank_account_number:xxxxxxxxxxxx:
      . . bank_account_type::
      . . bank_name::
      . . bank_routing_number:xxxxxxxxxxxx:
      . . card_number:xxxxxxxxxxxx8603:
      . . city:Madison:
      . . country:United States:
      . . duplicate_ok::
      . . email:johnsmith@gmail.com:
      . . exp_month:11:
      . . exp_year:2018:
      . . first_name:John:
      . . last_name:Smith:
      . . make_anonymous:False:
      . . payment_type:mc:
      . . phone:800-555-1212:
      . . security_code:123:
      . . state:CT:
      . . zip:06443:
      . BEGIN make_donation(John Smith, 9536): Checking for a concurrent donation in session 178b70c4c7fd8ec275572faa8f8e0efd
      . . make_donation(John Smith, 9536): DONATION_IN_PROGRESS: False
      . END__ make_donation(John Smith, 9536): Checking for a concurrent donation in session 178b70c4c7fd8ec275572faa8f8e0efd
      . BEGIN make_donation(John Smith, 9536): Establishing connection to Authorize server
      . END__ make_donation(John Smith, 9536): Establishing connection to Authorize server
      . BEGIN make_donation(John Smith, 9536): Submitting transaction to Authorize server
      . . BEGIN make_donation(John Smith, 9536): Gathering params
      . . END__ make_donation(John Smith, 9536): Gathering params
      . . BEGIN make_donation(John Smith, 9536): Submitting
      . . END__ make_donation(John Smith, 9536): Submitting
      . . BEGIN make_donation(John Smith, 9536): Checking response
      . . END__ make_donation(John Smith, 9536): Checking response
      . END__ make_donation(John Smith, 9536): Submitting transaction to Authorize server
      .       make_donation(John Smith, 9536): Transaction approved: 8087648987
      . BEGIN make_donation(John Smith, 9536): Gathering info before transaction
      . END__ make_donation(John Smith, 9536): Gathering info before transaction
      . BEGIN make_donation(John Smith, 9536): Doing database updates
      . . BEGIN make_donation(John Smith, 9536): Updating User
      . . END__ make_donation(John Smith, 9536): Updating User
      . . BEGIN make_donation(John Smith, 9536): Updating Profile
      . . END__ make_donation(John Smith, 9536): Updating Profile
      . . BEGIN make_donation(John Smith, 9536): Updating Contribution
      . . END__ make_donation(John Smith, 9536): Updating Contribution
      . . BEGIN make_donation(John Smith, 9536): Updating Payment
      . . END__ make_donation(John Smith, 9536): Updating Payment
      . END__ make_donation(John Smith, 9536): Doing database updates
      . BEGIN make_donation(John Smith, 9536): Sending e-mail receipt
      . . Sent e-mail To: ['johnsmith@gmail.com']
      . .             Cc: []
      . .            Bcc: ['email_copies@freds_client.com']
      . .        Subject: Donation Confirmation
      . .
      . . We gratefully acknowledge your generous donation.
      . .
      . . Please note that your bank or credit card statement will indicate
      . . that the donation was made to "freds_client...".  Donations are 100%
      . . tax deductible to the extent permitted by law.
      . .
      . . For information about freds_client, contact us at 
      . . support@freds_client.com.
      . .
      . .
      . . Bill Johnson
      . . Anonymous: No
      . . $30.00
      . .
      . . Total: $30.00
      . .
      . .
      . . YOUR INFORMATION
      . . John Smith
      . . 123 Main St
      . . Madison, CT  06443
      . . United States
      . . 800-555-1212
      . . johnsmith@gmail.com
      . .
      . . Credit Card Number: xxxxxxxxxxxx8603
      . . Expires: 11/2018
      . . Transaction ID: 8087648987
      . .
      . .
      . . (Sent from webprod1.freds_client.com)
      . END__ make_donation(John Smith, 9536): Sending e-mail receipt
      . Showing donate confirmation page
      END__ VIEW app1.donate -- Elapsed secs = 1.02749800682
      

      Note that it shows:

      1. All data sent by the user
      2. That there were no validation errors
      3. The BEGIN and END of each step we took, with nested operations indented for readability
      4. A complete copy of the email receipt we sent out
      5. The elapsed time to process the HTTP request
      6. etc.
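
      Here's a minimal sketch of how such nested BEGIN/END logging can be
      done in Python with a context manager.  It's illustrative only, not
      my actual implementation -- the logger name, the ". " indent string,
      and the global depth counter (which would really be thread-local)
      are all assumptions:

      import logging
      from contextlib import contextmanager

      logging.basicConfig(level=logging.INFO, format="%(message)s")
      logger = logging.getLogger("app1")
      _depth = 0   # nesting depth; use thread-local storage in real code

      @contextmanager
      def log_step(description):
          """Log BEGIN/END lines around a step, indenting nested steps."""
          global _depth
          logger.info("%sBEGIN %s", ". " * _depth, description)
          _depth += 1
          try:
              yield
          finally:
              _depth -= 1
              logger.info("%sEND__ %s", ". " * _depth, description)

      # Usage:
      #   with log_step("Submitting transaction to Authorize server"):
      #       with log_step("Gathering params"):
      #           ...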

      Logging of Unexpected or Internal Errors

      When an unexpected error occurs, the log shows the error in the context of what was happening at the time:

      BEGIN VIEW app1.donate
      . All cleaned fields:
      . . address_1:123 Main St:
      . . address_2::
      . . amount:30.00:
      . . bank_account_name::
      . . bank_account_number:xxxxxxxxxxxx:
      . . bank_account_type::
      . . bank_name::
      . . bank_routing_number:xxxxxxxxxxxx:
      . . card_number:xxxxxxxxxxxx8603:
      . . city:Madison:
      . . country:United States:
      . . duplicate_ok::
      . . email:johnsmith@gmail.com:
      . . exp_month:11:
      . . exp_year:2018:
      . . first_name:John:
      . . last_name:Smith:
      . . make_anonymous:False:
      . . payment_type:mc:
      . . phone:800-555-1212:
      . . security_code:123:
      . . state:CT:
      . . zip:06443:
      . BEGIN make_donation(John Smith, 9536): Checking for a concurrent donation in session 178b70c4c7fd8ec275572faa8f8e0efd
      . . make_donation(John Smith, 9536): DONATION_IN_PROGRESS: False
      . END__ make_donation(John Smith, 9536): Checking for a concurrent donation in session 178b70c4c7fd8ec275572faa8f8e0efd
      . BEGIN make_donation(John Smith, 9536): Establishing connection to Authorize server
      . END__ make_donation(John Smith, 9536): Establishing connection to Authorize server
      . BEGIN make_donation(John Smith, 9536): Submitting transaction to Authorize server
      . . BEGIN make_donation(John Smith, 9536): Gathering params
      . . END__ make_donation(John Smith, 9536): Gathering params
      . . BEGIN make_donation(John Smith, 9536): Submitting
      . make_donation(John Smith, 9536): An error occurred, so we're cleaning up
      . make_donation(John Smith, 9536): No Authorize transaction to void
      . make_donation(John Smith, 9536): No User to clean up
      . make_donation(John Smith, 9536): No Profile to clean up
      . make_donation(John Smith, 9536): No Contribution to clean up
      . make_donation(John Smith, 9536): No Payment to clean up
      . BEGIN handle_error()
      . ERROR: app1.DonationException: Unexpected error during step: Submitting transaction to Authorize server
      . USER MESSAGE: Error: Donation failed.  Your credit card has not been charged.
      . KWARGS:
      . STACK TRACE:
      . Traceback (most recent call last):
      .   File "/var/www/django/app1/views.py", line 834, in donate
      .     make_donation(details)
      .   File "/var/www/django/app1/donate.py", line 1217, in make_donation
      .     raise DonationException(msg, e)
      . . NESTED EXCEPTION: urllib2.URLError:
      . . NESTED EXCEPTION ARG: [Errno 113] No route to host
      . . NESTED STACK TRACE:
      . . Traceback (most recent call last):
      . .   File "/var/www/django/app1/donate.py", line 346, in make_donation
      . .     patient_donation,
      . .   File "/var/www/django/app1/authorize_net.py", line 388, in auth_submit
      . .     auth_transaction = Transaction.sale(params)
      . .   File "/var/python27/virtualenvs/venv1/lib/python2.7/site-packages/authorize/transaction.py", line 8, in sale
      . .     return Configuration.api.transaction.sale(params)
      . .   File "/var/python27/virtualenvs/venv1/lib/python2.7/site-packages/authorize/apis/transaction_api.py", line 24, in sale
      . .     return self.api._make_call(self._aim_base_request('authCaptureTransaction', xact))
      . .   File "/var/python27/virtualenvs/venv1/lib/python2.7/site-packages/authorize/apis/authorize_api.py", line 57, in _make_call
      . .     response = urllib2.urlopen(request).read()
      . .   File "/usr/lib64/python2.7/urllib2.py", line 154, in urlopen
      . .     return opener.open(url, data, timeout)
      . .   File "/usr/lib64/python2.7/urllib2.py", line 431, in open
      . .     response = self._open(req, data)
      . .   File "/usr/lib64/python2.7/urllib2.py", line 449, in _open
      . .     '_open', req)
      . .   File "/usr/lib64/python2.7/urllib2.py", line 409, in _call_chain
      . .     result = func(*args)
      . .   File "/usr/lib64/python2.7/urllib2.py", line 1242, in https_open
      . .     context=self._context)
      . .   File "/usr/lib64/python2.7/urllib2.py", line 1199, in do_open
      . .     raise URLError(err)
      . Showing last_chance_error.html
      . END__ handle_error()
      END__ VIEW app1.donate -- Elapsed secs = 0.658673048019
      

      Note that it shows:

      1. All data sent by the user
      2. That there were no validation errors
      3. The BEGIN and END of each step we took, with nested operations indented for readability, to provide a context for exactly where the error occurred
      4. The exact exception name and arguments
      5. The exact error message, if any, shown to the user
      6. A full stack trace for the exception, plus recursively, the name, arguments, and stack trace of any "nested" exceptions that were originally raised to cause this exception to be raised
      7. The elapsed time to process the HTTP request
      8. etc.
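
      Here's a similar sketch of how an error handler can log an exception
      and then, recursively, any "nested" exception that originally caused
      it.  This version is Python 3 flavored (the code above is Python 2,
      and passed the nested exception as a constructor argument); it's a
      rough outline, not the actual handle_error() implementation:

      import logging
      import traceback

      logger = logging.getLogger("app1")

      def log_exception(e, indent=". "):
          """Log an exception's name, args, and stack trace, then recurse
          into the nested exception, if any, that caused it."""
          logger.error("%sERROR: %s.%s: %s", indent,
                       type(e).__module__, type(e).__name__, e)
          logger.error("%sSTACK TRACE:", indent)
          tb_lines = traceback.format_exception(type(e), e, e.__traceback__)
          for chunk in tb_lines:
              for line in chunk.rstrip().splitlines():
                  logger.error("%s%s", indent, line)
          # Ignoring __suppress_context__ for brevity.
          cause = e.__cause__ or e.__context__
          if cause is not None:
              logger.error("%sNESTED EXCEPTION:", indent + ". ")
              log_exception(cause, indent + ". ")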

      Additional Columns on ALL Log lines

      Note that the 3 examples above have all been trimmed to focus on the logging features just discussed.  In reality, these and ALL lines written to the log file also contain the following additional columns, which I removed by hand from the examples (a sketch of one way to inject such columns follows the list):

      1. Date/time to the nearest millisecond
      2. Name of web app (in case the same log file is written to by multiple apps, which is useful if there's any chance one app can affect another)
      3. Version number of web app
      4. Username of user, if any, who was logged in to the app and made the HTTP request
      5. IP address of user
      6. Thread ID of the server thread handling the HTTP request
      7. HTTP method (GET, POST, HEAD, etc.)
      8. Whether it was a regular request or an Ajax request
      9. Exact URL requested, including all query params
      10. HTTP referrer (URL of previous page user viewed)
      11. User agent string (the device, browser, OS, etc., the user is using)
      12. The BEGIN/END nesting depth of the current log line (so you can filter out unneeded levels of detail)
      13. Raw millisecond counter (for simple subtraction to compute elapsed time between 2 log lines)
      14. Total Memory (RAM) usage by the web server (to detect and fix memory leaks)
      15. Total Memory allocated by the web server (and therefore unavailable to other processes on the same server computer)
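
      Here's a sketch of one way to inject such columns with Python's
      standard logging module, using a Filter that attaches extra fields
      to every record and a Formatter that prints them.  The column set,
      the "|" separator, and the get_current_username() helper are
      placeholders, not my actual layout:

      import logging
      import threading
      import time

      def get_current_username():
          # Placeholder: in a real web app this comes from the HTTP
          # request, e.g. via middleware that stashes request.user in
          # thread-local storage.
          return "jsmith"

      class ContextFilter(logging.Filter):
          """Attach extra columns to every record logged through the logger."""
          def filter(self, record):
              record.app = "app1"                       # name of web app
              record.version = "1.2.3"                  # version of web app
              record.user = get_current_username()      # logged-in username
              record.thread_id = threading.get_ident()  # server thread ID
              record.millis = int(time.time() * 1000)   # raw ms counter
              return True                               # never suppress a record

      logger = logging.getLogger("app1")
      handler = logging.StreamHandler()
      handler.setFormatter(logging.Formatter(
          "%(asctime)s|%(app)s|%(version)s|%(user)s"
          "|%(thread_id)d|%(millis)d|%(message)s"))
      logger.addHandler(handler)
      logger.addFilter(ContextFilter())
      logger.setLevel(logging.INFO)

      logger.info("BEGIN VIEW app1.donate")   # now carries all the columns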

      These additional columns are included on all log lines for the following reasons:

      1. To provide context when reading the logs:

        1. At what time, and with which app, version, device, OS, browser, user, URL, referrer, etc., was the action performed or the error raised?

      2. To support searching of the logs by the additional columns:

        1. What errors are occurring?
        2. Do we ever get this error with browser X?
        3. What actions did user Y do during a specific time period?
        4. What percent of requests are made by each browser and version? Is anyone still using MSIE10, or can we drop support for it? Is anyone still using any of the Microsoft browsers, or can we drop them all?
        5. What requests, if any, take longer than 2.5 seconds to complete?
        6. What's the most/least RAM we ever use on the server?

      3. To support filtering of the logs by the additional columns (a toy filter script follows this list):

        1. You can view the entire log unfiltered to see a complete picture of what was happening on the web server when an error occurred.  Including all of the concurrent server threads that were handling dozens or hundreds of concurrent HTTP requests from different users, with all the detailed steps taken for each user intermixed to show exactly the order in which they occurred relative to each other.
        2. Or you can filter by username to view log lines for only the handful of concurrent HTTP requests from one user, as though there were no other users at that time.
        3. Or you can filter by thread ID to see all the details of one HTTP request start to finish, in isolation, as though it was the only thing happening on the server at the time.
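
      As a toy illustration, assuming the pipe-delimited column layout
      sketched above (username in column 3, thread ID in column 4), such
      filtering is a one-liner with grep, or a few lines of Python:

      import sys

      def filter_log(lines, column, value, sep="|"):
          """Yield only the log lines whose given column equals value."""
          for line in lines:
              fields = line.rstrip("\n").split(sep)
              if len(fields) > column and fields[column] == value:
                  yield line

      # Usage: print only one user's lines from a saved log file.
      # ("app1.log" and "jsmith" are hypothetical names.)
      if __name__ == "__main__":
          with open("app1.log") as f:
              sys.stdout.writelines(filter_log(f, column=3, value="jsmith"))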

      This makes it very easy to do things like:

      1. Read the logs, following the nested BEGIN/END structure as though you are stepping through the code that was executed.

      2. Understand what happened when an error occurred, in terms of the one isolated HTTP request, or in terms of the entire set of concurrent requests being handled by the server at the time.

      3. See how long things are taking.

      4. See what things use lots of memory, or cause gradual memory leaks.

      5. Set up automated filters to show things in real time, as they are happening.  For example, you can simply use the Unix/Linux/Mac commands "tail -F" and "grep" (a rough Python equivalent of one such monitor appears after this list) to:

        1. Show a real-time stream of errors, if any are occurring

        2. Watch memory leaking gradually over time, or suddenly spiking

        3. Monitor the actions being performed by a given user as she demos to a client.  I've sometimes set up 4 different monitors side by side, streaming what a user was doing at 4 different levels of detail:

          1. One showing all the gory details of everything that was happening on the server as she demoed, including actions taken by other users meanwhile.
          2. Another filtered to show only her actions.
          3. Another filtered to show her actions but only to 3 levels of BEGIN/END nesting.  Plus any errors of course, since they are all logged at level 1.
          4. Another filtered to show her actions but only to 1 level of BEGIN/END nesting -- the start and stop of each complete HTTP request.  Plus errors of course.

          I'd watch the least detailed monitor to see how the demo was going, knowing every action she took as she gave the demo to a prospective investor or customer.  And glance at one of the other monitors if necessary to see details.  And give her a quick phone call or text or "chat" message so she could make a mid-course correction during the demo if she seemed to be having any sort of problem, or seemed to be heading into a known danger zone.

          After the 1st such demo, she started referring to me as "Big Brother" because she never knew when I was watching!
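
      Here's a rough Python equivalent of one such monitor, following the
      log file as it grows (like "tail -F", minus handling of log
      rotation) and showing one user's lines down to a maximum BEGIN/END
      depth, plus all errors.  The file name, username, and column
      positions are assumptions carried over from the sketches above:

      import time

      def follow(path):
          """Crude tail -F: yield lines appended to path as they arrive."""
          with open(path) as f:
              f.seek(0, 2)                  # start at end of file, like tail
              while True:
                  line = f.readline()
                  if line:
                      yield line
                  else:
                      time.sleep(0.25)      # wait for more output

      def depth(message):
          """BEGIN/END nesting depth, counted from the '. ' indent prefix."""
          d = 0
          while message.startswith(". "):
              message = message[2:]
              d += 1
          return d

      # Show one user's shallowly nested lines, plus all ERROR lines.
      for line in follow("app1.log"):
          fields = line.rstrip("\n").split("|", 6)   # 6 columns + message
          if len(fields) < 7:
              continue                               # not one of our lines
          user, message = fields[3], fields[6]
          if "ERROR" in message or (user == "jsmith" and depth(message) <= 2):
              print(line, end="")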

      I've done this type of logging on every project since 2000 or so.  In Java and in Python.  As the years went by and new tools came along, I integrated this logging with Java's log4j, and with the native logging of Python and Django.

      --Fred