Revisiting Augustine’s Laws

August 19, 2015

Augustine’s Laws is a collection of management insights first published in the mid-1980s by Norman Augustine, former Under Secretary of the Army and CEO of defense contractor Martin Marietta. It contains 52 (one per week) “laws” of management that Mr. Augustine picked up in his many years working in government and the defense industry.  Each law is written in the form of a humorous vignette that is meant to stand on its own.  The book is still available via Amazon (though at a premium) and, given its substantial enduring wisdom, is surprisingly hard to find through the library system.

Most of the book is specific to government contracting circa the late 20th century, but some of the insights are just as applicable today as the day they were first written.  The canonical list of the laws is available at Wikipedia.  Here are some of the more interesting ones:

  • Law XV (aka Law of Insatiable Appetites) – The last 10% of performance generates one third of the cost and two thirds of the problems.
    • Corollary 1: The price of the ultimate is very high indeed. Sometimes it would seem that one might be better served by having a little more of a little less.
    • This is very similar to George Patton’s statement – “A good plan, violently executed now, is better than a perfect plan next week.”
  • Law XXIII (aka Law of Unmitigated Optimism) – Any task can be completed in only one-third more time than is currently estimated.
    • Corollary 1: If a schedule is three-quarters complete, only one-third of the time remains.
    • Corollary 2: When it comes to schedule adherence everything is relative.
    • Corollary 3: The sooner you start to fall behind the more time you will have to catch up.
  • Law XXIV (aka Law of Economic Unipolarity) – The only thing more costly than stretching the schedule of an established project is accelerating it, which is itself the most costly action known to man.
  • Law XXXV (aka Law of Definitive Imprecision) – The weaker the data available upon which to base one’s conclusion, the greater the precision which should be quoted in order to give the data authenticity.
  • Law XXXVII (aka Law of Apocalyptic Costing) – Ninety percent of the time things will turn out worse than you expect. The other 10% of the time you had no right to expect so much.
  • Law XLVIII (aka Law of Oratorical Engineering) – The more time you spend talking about what you have been doing, the less time you have to do what you have been talking about. Eventually, you spend more and more time talking about less and less until finally you spend all of your time talking about nothing.

The perspective of the book is that of a senior manager working on large defense programs in the late 1970s and early 1980s.  While there are certainly universal truths, much has changed in the intervening thirty years – particularly in the field of software development.  Today software is generally built incrementally by self-directed teams using a flavor of agile.  Most agile teams live by the credo that the best way to eat an elephant is one bite at a time.  Agile is popular not because the problems are any less challenging – indeed application complexity is increasing, not decreasing – but because it provides a predictability that simply is not possible with massive projects.

As interesting as the laws are, the management observations in the last chapter are as relevant today as the day they were written – if not as pithy.

  • People are the key to success in most any undertaking, including business.
  • Teamwork is the fabric of effective business organizations.
  • Self-image is as important in business as in sports. A corporate team must think of itself as a winner.
  • Motivation makes the difference.
  • Recognition of accomplishment (and the lack thereof) is an essential form of feedback.
  • Listening to employees and customers pays dividends – they know their jobs and needs better than anyone else.
  • Delegation, wherever practicable, is the best course.
  • Openness with employees and customers alike is essential to building trust.
  • Customers deserve the very best.
  • Quality is the key to customer satisfaction.
  • Stability of funding, schedules, goals and people is critical to any smooth business operation.
  • Demanding the last little bit of effort from oneself is essential – it can make the difference against competitors who don’t have the will to put out the extra effort.
  • Provision for the unexpected is a businessperson’s best insurance policy.
  • “Touch-Labor” – people who actually come into contact with the product – are the only certain contributors in any organization.
  • Rules, regulations, policies, reports, and organization charts are not a substitute for sound management judgment.
  • Logic in presenting decision options, consequences, benefits, and risks is imperative.
  • Conservatism, prudent conservatism, is generally the best course in financial matters.
  • Integrity is the sine qua non (indispensable and essential action, condition, or ingredient) of all human endeavors including business.

 


Thoughts on ORM Tools

January 14, 2015

The following is a summary of an email thread discussing Object-Relational Mapping (ORM) tools.  In my experience developers hold strong opinions about ORM tools.  In a past life my organization used LLBLGen, and the folks who were most informed on ORM tools felt strongly that it was much better than both NHibernate and Entity Framework.   As a conversation starter I provided two articles – one from December 2013 and a follow-up from February 2014 – comparing the various ORM / data access frameworks.  I wanted to see where my organization stood on the topic of ORM.

As expected, there were strong opinions.  I found that there were essentially two camps – believers and non-believers. Interestingly, the group (about 10 very well informed senior folks) was evenly split on whether ORM is worth the effort.  Also interesting was that there was little disagreement about the pros and cons of ORM.

Believers

The “Believers” are proponents of Microsoft’s Entity Framework.  I am apparently the only one to have ever used LLBLGen.   Somewhat surprisingly, no one in the group had any significant experience with NHibernate.  Some had passing experience with the micro ORMs Dapper and PetaPoco.  Believers say that the savings achieved by having a clean, very flexible data access layer are worth the overhead of maintaining the ORM.  Their argument is that the investment in tuning the ORM is smaller than the productivity gains achieved from its usage.
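
As a rough illustration of the believers’ argument, here is a minimal Entity Framework 6 sketch.  The StoreContext, Customer, and Order classes are hypothetical stand-ins for illustration, not code from the discussion:

```csharp
// Minimal EF 6 sketch. StoreContext, Customer, and Order are hypothetical.
using System;
using System.Collections.Generic;
using System.Data.Entity;
using System.Linq;

public class Customer
{
    public int Id { get; set; }
    public string Name { get; set; }
    public virtual ICollection<Order> Orders { get; set; }
}

public class Order
{
    public int Id { get; set; }
    public int CustomerId { get; set; }
    public decimal Total { get; set; }
    public virtual Customer Customer { get; set; }
}

public class StoreContext : DbContext
{
    // Connection string resolved by EF convention for this sketch.
    public DbSet<Customer> Customers { get; set; }
    public DbSet<Order> Orders { get; set; }
}

class Program
{
    static void Main()
    {
        using (var db = new StoreContext())
        {
            // A LINQ query; EF translates it to SQL and materializes objects.
            var bigSpenders = db.Customers
                .Where(c => c.Orders.Any(o => o.Total > 1000m))
                .OrderBy(c => c.Name)
                .ToList();

            foreach (var customer in bigSpenders)
                Console.WriteLine(customer.Name);
        }
    }
}
```

The appeal is that the query, the mapping, and the object model live in one place with no hand-written SQL for the common cases.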

Non-believers

This group believes that the return on investment does not justify the overhead of maintaining an ORM tool.  They believe that stored procedures, called from custom data access layer code written in ADO.NET, are best.  Some have built code templates to help generate and maintain their data access layer.  They believe this really helps efficiency while keeping full control over the code and its execution.
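
For contrast, the non-believers’ style might look like the following hand-rolled ADO.NET sketch.  The connection string, stored procedure name, and parameter are placeholders:

```csharp
// Hand-rolled ADO.NET call to a (hypothetical) stored procedure.
using System;
using System.Data;
using System.Data.SqlClient;

class CustomerRepository
{
    private readonly string _connectionString;

    public CustomerRepository(string connectionString)
    {
        _connectionString = connectionString;
    }

    public void PrintBigSpenders(decimal minimumOrderTotal)
    {
        using (var connection = new SqlConnection(_connectionString))
        using (var command = new SqlCommand("dbo.GetBigSpenders", connection))
        {
            command.CommandType = CommandType.StoredProcedure;
            command.Parameters.AddWithValue("@MinimumOrderTotal", minimumOrderTotal);

            connection.Open();
            using (var reader = command.ExecuteReader())
            {
                while (reader.Read())
                    Console.WriteLine(reader.GetString(reader.GetOrdinal("Name")));
            }
        }
    }
}
```

The tradeoff is explicit control over the SQL and its execution plan in exchange for more plumbing code per query.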

Pros and Cons

There was broad consensus around the pros and cons of ORM – again, based on experience with Entity Framework versions 5 and 6.

Pros

  • Relatively straightforward. It has good default conventions and rules.
  • Functional – it implements three approaches (code-first, model-first, and database-first), inheritance support, and eager and lazy loading.
  • Flexible. It is possible to change conventions and rules and to select only the relations you need.

Cons

  • Hard to fine-tune EF (e.g., query optimization). In perhaps half the cases you end up writing SQL manually and executing it from the EF context.
  • Not very good for complex models. SQL queries become very large (up to several pages) and hard to understand.
  • Slow to fetch large datasets (thousands of rows).
  • Not suitable for batch operations (insert, update, delete).
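
The batch-operation con is worth a concrete illustration.  In EF 5/6, SaveChanges issues one statement per tracked entity, so set-based work is usually pushed back down to raw SQL.  The sketch below reuses the hypothetical StoreContext from the earlier example:

```csharp
// Batch operations with EF 5/6, reusing the hypothetical StoreContext above.
using System.Linq;

static class OrderMaintenance
{
    public static void ZeroOutNegativeTotals()
    {
        using (var db = new StoreContext())
        {
            // Row-by-row: loads each order, then one UPDATE statement per entity.
            foreach (var order in db.Orders.Where(o => o.Total < 0m).ToList())
                order.Total = 0m;
            db.SaveChanges();

            // Set-based alternative: a single UPDATE executed through the EF context.
            db.Database.ExecuteSqlCommand(
                "UPDATE dbo.Orders SET Total = 0 WHERE Total < 0");
        }
    }
}
```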

Net net

There is a range of problems where ORM would be a good solution and others where it would not.  Small, relatively non-transactional applications seem to be a good fit.  As the volume of data grows, the value gap between ORM and well-done, hand-crafted SQL narrows.  The tradeoff, obviously, is that simple changes take more time to implement and test than they would with something like EF.

ORM seemingly can be made to work for most applications – the question is at what cost.  Hand-coding SQL might not make sense for an application with hundreds of database tables.  On the other hand, ORM might not make sense for a highly transactional database.   In the end my sense is that this comes down to people and your architect’s preference.  The choice of an ORM is like choosing a platform – .NET MVC or Ruby on Rails, SQL Server or MySQL, Linux or Windows.  While there are some people out there who can easily move between platforms, in my experience developers have preferences and comfort zones.  The choice of whether to use an ORM tool, and if so which one, is both an application and a personal decision.

References

https://www.devbridge.com/articles/entity-framework-6-vs-nhibernate-4/

http://stackoverflow.com/questions/2891905/should-i-use-entity-framework-instead-of-raw-ado-net

http://weblogs.asp.net/fbouma/fetch-performance-of-various-net-orm-data-access-frameworks-part-2

http://weblogs.asp.net/fbouma/fetch-performance-of-various-net-orm-data-access-frameworks


Agile Pre-mortem Retrospectives

June 6, 2014

Failure is Your Friend is the title of the June 4, 2014 Freakonomics podcast.  The podcast interviews cognitive psychologist Gary Klein.  Klein talks about an interesting technique called the pre-mortem: “With a pre-mortem you try to think about everything that might go wrong before it goes wrong.”  As I was listening to Klein talk about how this might work in the physical world and in medical procedures, it occurred to me that this might be a nice complement to an agile software development project.

Most scrum teams do some type of post-mortem after each sprint.  Most of the literature today calls these activities retrospectives, which has a more positive connotation.  (Taken literally, post-mortem means “after death” in Latin.) After training exercises the Army conducts after action reviews, affectionately called “AARs.”  For informal AARs (formal AARs have a prescribed format that is expected to be followed) I always found three questions elicited the most participation – what went well, what did not go well, and what could have been done better.  This same format is often effective in sprint retrospectives.

A pre-mortem retrospective would follow a very different format.  It asks the participants to fast-forward to after the release and assume that the project was a failure.  Klein’s suggestion is to take two minutes and ask each participant to privately compile a list of reasons the project failed.  He then surveys the group and compiles a consolidated master list.  Finally, he asks everyone in the room to think up one thing they could do to help the project.  Ideally the team comes away more attuned to what could go wrong and more willing to engage in risk management.

In concept the idea makes a ton of sense.  I can see how it would force the team to be honest with themselves about risks, temper overconfidence, and ultimately be more proactive.  On the other hand, a pre-mortem is one more meeting and one more activity that does not directly contribute to the project.  I question whether there is enough value to do a pre-mortem on every sprint; however, for major new initiatives it could be a useful activity.  I quickly found two references on this topic.

http://www.slideshare.net/mgaewsj/pre-mortem-retrospectives

http://inevitablyagile.wordpress.com/2011/03/02/pre-mortem-exercise/


Using the right database tool

April 27, 2014

Robert Haas, a major contributor and committer on the PostgreSQL project, recently wrote a provocative post entitled “Why the Clock is Ticking for MongoDB.”  He was actually responding to a post by the CEO of MongoDB, “Why the clock’s ticking for relational databases.”  I am no database expert; however, it occurs to me that relational databases are not going anywhere AND NoSQL databases absolutely have a place in the modern world.  (I do not believe Haas was implying otherwise.)  It is a matter of using the right tool to solve the business problem.

As Haas indicates, RDBMS solutions are great for many problems, such as query and analysis, where ACID (Atomic, Consistent, Isolated, and Durable) guarantees are important considerations.  When the size of the data, the need for global scale, and the transaction volume grow (think Twitter, Gmail, Flickr), NoSQL (read: not-only-SQL) solutions make a ton of sense.

Kristof Kovacs has the most complete comparison of the various NoSQL solutions.  Mongo seems to be the most popular document database, Cassandra for row/column data, and Couchbase for caching.  Quoting Kovacs: “That being said, relational databases will always be the best for the stuff that has relations.”  To that end there is no shortage of RDBMS solutions from the world’s largest software vendors (Oracle – 12c, Microsoft – SQL Server, IBM – DB2) as well as many other open source solutions such as SQLite, MySQL, and PostgreSQL.
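
To make the distinction concrete, here is a minimal sketch of the same order modeled both ways – normalized relational rows versus a single nested document of the kind a document database such as MongoDB would store.  The class and property names are hypothetical:

```csharp
// Hypothetical data shapes only; no database driver code involved.
using System;
using System.Collections.Generic;

// Relational shape: normalized rows joined by foreign keys.
public class OrderRow
{
    public int OrderId { get; set; }
    public int CustomerId { get; set; }   // FK to a Customers table
    public DateTime PlacedOn { get; set; }
}

public class OrderLineRow
{
    public int OrderLineId { get; set; }
    public int OrderId { get; set; }      // FK back to Orders
    public string Sku { get; set; }
    public int Quantity { get; set; }
}

// Document shape: the whole aggregate nested in one document.
public class OrderDocument
{
    public string Id { get; set; }
    public string CustomerName { get; set; }
    public DateTime PlacedOn { get; set; }
    public List<OrderLine> Lines { get; set; }
}

public class OrderLine
{
    public string Sku { get; set; }
    public int Quantity { get; set; }
}
```

The relational shape shines when you need to join and query across the relations; the document shape shines when the aggregate is read and written as a unit at scale.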

In the spirit of being complete, Hadoop is not a database per se – though HBase is a database built on top of Hadoop.  Hadoop is a technology meant for crunching large amounts of data in a distributed manner, typically using batch jobs and the MapReduce design pattern. It can be used with many NoSQL databases, such as Cassandra.


Scaled Agile Framework (SAFe)

December 27, 2013

Implementing agile methods at higher levels, where multiple programs and business interests often intersect, has always been a challenge.  Consultant Dean Leffingwell, formerly of Rally Software and Rational Software, created a project management framework called the Scaled Agile Framework (SAFe) for applying agile principles at the enterprise level.

Scaled Agile Framework

At a high level SAFe is a set of best practices tailored for organizations to embrace agile principles at the portfolio level.  Conceptually, SAFe creates a framework whereby there is an integrated view of, and coordination between, multiple different projects.  NB: The graphic on the SAFe home page (see screenshot above) is clickable and is a terrific agile reference in and of itself.

One of the best things about agile methodologies is that they are lightweight and self-directed.  High-level frameworks run the risk of having more overhead than value.  On the other hand, nearly every organization with more than one product needs an integrated view of how its projects fit together.  Indeed, it is not unusual to see senior managers disconnected from day-to-day operations struggle to see how the pieces fit together, or attempt invalid comparisons between teams, such as comparing story point velocity.

At the end of 2013 two of the market leaders in application life cycle management (ALM) are Rally Software and Microsoft.  Both Rally and Microsoft’s Team Foundation Server (TFS) have wholeheartedly embraced the notion of portfolio management in the latest iterations of their respective products.

Rob Pinna of the Rally development team has a great analysis of SAFe here.  Similarly, InCycle Software, a Microsoft Gold Certified ALM partner, recently did a webinar highlighting a customized TFS template used to demonstrate how TFS can support SAFe.


Thoughts on Agile best practices

August 2, 2013

I came across this article about Agile from David Starr, which I thought had some really helpful insights.

  • There are a ton of agile methodologies – Scrum, XP, Kanban, Lean, and the list goes on.  The best agile teams are indeed characterized by “just-enough process.” The key thing is that the team itself buys into what it is doing.  Similarly, the process needs to fit within the organizational structure.  Thrashing will occur if either of these conditions is not satisfied.
  • The three key tenets of agile are ship often, keep quality high, and solicit and respond to feedback.  In my view the trick to making this work is breaking big problems down into smaller ones – just like we were taught in CS101.
  • A great question to ask is how long it would take to release just one comment in one code file to customers – and how that time could be cut in half.  Extrapolating from that question, what would it take to cut the current sprint time in half?  Some organizations ship multiple changes per day.  Clearly that does not make sense for every organization, but asking this question challenges the organization to understand what is not adequately automated.  Are there enough developer tests?  Is deployment sufficiently automated?  Are mechanisms to solicit stakeholder feedback built into the team’s workflow?

Thinking about what has worked for us, we are most successful when the following happen (and most challenged when they do not):

  • Keep user stories as small as possible
  • The larger the story the more communication becomes necessary
  • Have a few metrics and reports that everyone can understand (e.g., velocity, story points, burn down charts)
  • Have regular demonstrations to end users
  • Ensure that each user story has clear acceptance criteria
  • Code reviews are done on complex functions / work done by less experienced engineers
  • Have a period of time where the end users can have hands on time with the product
  • There is automated regression testing
  • Time is allocated for keeping the code well organized (i.e., refactoring, commenting, etc.)
  • Time is allocated for prototyping new functionality or concepts

While I agree with most of what is in the article, I think some of it is a bridge too far.  For example, I fully agree with these statements:

  • Continuous integration (CI) grew from a novelty to a basic measure of professionalism. Pair programming found occasional root. Formal modeling fell into disfavor for all the right reasons.

I particularly appreciate the “occasional” qualifier about pair programming.  On the other hand, I am not so sure that I agree with these statements:

  • Test-first practices evolved to being a de-facto design tool.
  • The understanding and use of design patterns resurged.

As I understand it, test-first is a component of XP, which may or may not be a mainstream practice.  Similarly, while it may well be the case that design patterns are being used more frequently, it may also be that this is happening without developers giving it much thought.  For example, when a developer codes against an interface or an abstract class, they are leaning on the same program-to-an-abstraction principle that underlies patterns like Abstract Factory.  Similarly, the Observer pattern is commonly implemented in GUIs via event listeners.  Beyond the more common patterns and the implementations embedded in the language and its libraries, the majority of the GoF patterns may primarily be the domain of architects, high-end programmers, and academics.
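
The Observer point is easy to see in everyday C#: wiring up an event handler is the Observer pattern, whether or not the developer thinks of it in those terms.  A minimal sketch follows; the Button and Clicked names are illustrative stand-ins, not tied to any particular UI framework:

```csharp
// Observer pattern via ordinary .NET events: the event source keeps a list of
// subscribers and notifies each one. Button/Clicked are illustrative names.
using System;

class Button
{
    public event EventHandler Clicked;   // the "subject" holding its observers

    public void SimulateClick()
    {
        var handler = Clicked;
        if (handler != null)
            handler(this, EventArgs.Empty);   // notify every subscriber
    }
}

class Program
{
    static void Main()
    {
        var button = new Button();

        // Each += registers an observer, though few developers call it that.
        button.Clicked += (sender, e) => Console.WriteLine("Logging the click");
        button.Clicked += (sender, e) => Console.WriteLine("Refreshing the view");

        button.SimulateClick();
    }
}
```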


Visual Studio Code Metrics

April 5, 2013

I recently stumbled across a feature in Visual Studio called Code Metrics.  As the name would imply, the feature calculates information about the quality of your code.  As with most things like this, your mileage may vary, and developer instinct will kick in when the results don’t make sense.  On the other hand, this is a quick and easy tool that can give you a sense of where you may have an issue.  I found the feature very straightforward to use.  You access it by right-clicking in the Solution Explorer.


After running it, five pieces of data are provided.  See the MSDN Reference for more detail.


  • Maintainability Index (scale 0-100, where higher is better).  Good is 20 to 100, Warning is 10-19, and Issue is 0-9.
  • Cyclomatic complexity (lower is better).  This measures the number of independent paths through a program’s source code.  I remember from my computer science days that any module with a value greater than 10 is considered hard to maintain.  Switch statements have the characteristic of driving up the CC metric but in practice are generally not hard to maintain (see the sketch after this list).
  • Depth of Inheritance (lower is better).  The theory goes that the deeper the inheritance hierarchy, the harder it may be to find where a given function is defined.
  • Class Coupling (lower is better).  Obviously the more one class can stand on its own the better and more maintainable it will be.  Ideally you want something to be loosely coupled with high cohesion.
  • Lines of Code (lower is better).  Again from the common sense department the smaller the module the easier it is to understand.
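
To make the cyclomatic complexity point concrete, here is a sketch using a hypothetical shipping-rate example: the switch-based version reports a higher CC than the table-driven rewrite, even though both are easy to maintain.

```csharp
// A switch raises the cyclomatic complexity metric (roughly one path per case)
// even though the code is simple; a table-driven rewrite does the same work with
// a lower reported CC. The shipping-rate example is hypothetical.
using System.Collections.Generic;

static class ShippingRates
{
    // Roughly CC = number of cases + 1; readable, but the metric climbs per case.
    public static decimal RateBySwitch(string zone)
    {
        switch (zone)
        {
            case "Local":         return 4.99m;
            case "Regional":      return 7.99m;
            case "National":      return 12.99m;
            case "International": return 24.99m;
            default:              return 0m;
        }
    }

    // Same behavior with a CC of roughly 2, because the branching moved into data.
    private static readonly Dictionary<string, decimal> Rates =
        new Dictionary<string, decimal>
        {
            { "Local", 4.99m }, { "Regional", 7.99m },
            { "National", 12.99m }, { "International", 24.99m }
        };

    public static decimal RateByTable(string zone)
    {
        decimal rate;
        return Rates.TryGetValue(zone, out rate) ? rate : 0m;
    }
}
```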

Some things to note:

  • The “X” icon exports the data to Excel
  • Functions roll up into collapsible rows.  In my example, a complexity of 133 is a rollup of all the underlying methods.
  • There is a useful filter function to find code that meets specific minimum or maximum criteria.

This feature comes pre-installed in Visual Studio 2012.  For older versions I believe you may need to use a plug-in.