Engineering Effectiveness

October 8, 2015

I recently stumbled across an awesome blog post from Peter Seibel (@peterseibel), the tech lead of Twitter’s Engineering Effectiveness group, entitled Let a 1,000 flowers bloom. Then rip 999 of them out by the roots.  It is a written version of a talk he gave at the Facebook @Scale conference.  It is a bit on the wordy side, but there are some really interesting nuggets, a bit of insight into the history of Twitter, and some very witty analogies.  Here are a few of the highlights.

  • We know how to build abstractions and modularize our code so that we can manage large code bases and how to deploy our software so it can handle the demands of millions or even billions of users. On the other hand, I’d argue that we don’t really yet have a good handle on how to scale that area that exists at the intersection of engineering and human organization—the place where groups like Engineering Effectiveness work.
  • I think a big part of the problem is that we—as an industry—are not very good about thinking about how to make engineers effective.
  • The Twitter EE motto is: “Quality, Speed, Joy”. Those are the three things we are trying to affect across all of Twitter engineering. Unlike that other famous triple, Fast, Cheap, Good, we believe you don’t have to pick just two.
  • We know from Dune that fear is the mind killer. So how does fear manifest in the context of software development? I would say tech debt. Tech debt is the mind killer. Tech debt is the lack of quality. It slows us down. It makes us miserable.
  • In order for engineering effectiveness engineers to be able to boost effectiveness across all of engineering, things need to be standardized.
  • Your goal should be to pick the set of tools and processes you will support and support the heck out of them. Invest more than you probably think you need to and focus relentlessly on making the tools and processes you do support awesome.
  • Finally there’s a psychological aspect to providing good tools to engineers that I have to believe has a real impact on people’s overall effectiveness. On one hand, good tools are just a pleasure to work with. On that basis alone, we should provide good tools for the same reason so many companies provide awesome food to their employees: it just makes coming to work every day that much more of a pleasure. But good tools play another important role: because the tools we use are themselves software, and we all spend all day writing software, having to do so with bad tools has this corrosive psychological effect of suggesting that maybe we don’t actually know how to write good software.
  • We don’t even really know what makes people productive; thus we talk about 10x engineers as though that’s a thing when even the studies that lead to the notion of a 10x engineer pointed more strongly to the notion of a 10x office. But we’d all agree, I think, that it is possible to affect engineers’ productivity. At the very least it is possible to harm it.

All of this makes a ton of sense and is very complementary to two intersecting industry trends – DevOps and Dev in Test.  If you agree that agile is at the heart of DevOps – operations and administration – then engineering effectiveness is an enabler.  A fundamental premise of DevOps is to minimize work in progress.  Let’s extend that model to tech debt – minimize tech debt and mental baggage.

Similarly, Dev in Test engineers are test engineers who are part of the development team.  Again the idea is to allow the organization to deliver value to customers faster.  An engineering effectiveness group, or even a single engineer, is another set of hands to streamline the efforts of the mainline development team.

My one quibble with Seibel’s assertions is the apparent questioning of the existence of the 10X engineer, as if they were like the Loch Ness Monster.  On the contrary, 10X engineers are as real as Murphy’s Law.  Managers are well served optimizing their contributions any way they can, whether that means providing the best available tooling, minimizing unnecessary activity (e.g., meetings), or eliminating anything else that takes them away from the code.


Thoughts on ORM Tools

January 14, 2015

The following is a summary of an email thread discussing Object-Relational Mapping (ORM) tools.  In my experience developers hold strong opinions about ORM tools.  In a past life my organization used LLBLGen, and the folks who were most informed on ORM tools had strong opinions that it was much better than both NHibernate and Entity Framework.  As a conversation starter I provided two articles, one from December of 2013 and a follow-up from February 2014, comparing the various ORM / data access frameworks.  I wanted to see where my organization stood on the topic of ORM.

As expected there were strong opinions.  I found that there were essentially two camps – believers and non-believers.  Interestingly, the group (of about 10 very well informed senior folks) was evenly split on whether ORM is worth the effort.  Also very interesting was that there was little disagreement about the pros and cons of ORM.

Believers

The “Believers” are proponents of Microsoft’s Entity Framework.  I am apparently the only one to have ever used LLBLGen.  Somewhat surprisingly, no one in the group had any significant experience with NHibernate.  Some had passing experience with the micro ORMs Dapper and PetaPoco.  Believers say that the savings achieved by having clean, very flexible data access layer code are worth the overhead of maintaining the ORM.  Their argument is that the investment in tuning the ORM is smaller than the productivity gains achieved from its usage.
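To make the contrast concrete, here is a minimal sketch of the kind of data access the Believers have in mind, using Entity Framework code-first.  The Customer and ShopContext types and the query itself are purely illustrative, not anything from the actual thread.

using System.Collections.Generic;
using System.Data.Entity;   // EntityFramework NuGet package
using System.Linq;

public class Customer
{
    public int Id { get; set; }
    public string Name { get; set; }
}

public class ShopContext : DbContext
{
    public DbSet<Customer> Customers { get; set; }
}

public static class CustomerQueries
{
    // EF translates the LINQ expression into SQL at runtime;
    // no hand-written query or row-to-object mapping code is needed.
    public static List<string> NamesStartingWith(string prefix)
    {
        using (var db = new ShopContext())
        {
            return db.Customers
                     .Where(c => c.Name.StartsWith(prefix))
                     .OrderBy(c => c.Name)
                     .Select(c => c.Name)
                     .ToList();
        }
    }
}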

Non-believers

This group believes that the overhead associated with maintaining an ORM tool does not justify the return on investment.  They believe that stored procedures, called from custom data access layer code written in ADO.NET, are best.  Some have built code templates to help generate and maintain their data access layer.  They believe this really helps efficiency while keeping full control over the code and its execution.
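The non-believers’ equivalent looks more like the following sketch: the SQL lives in a stored procedure and the mapping is hand-rolled ADO.NET.  The procedure name and parameter are illustrative assumptions.

using System.Collections.Generic;
using System.Data;
using System.Data.SqlClient;

public static class CustomerData
{
    // The query lives in a stored procedure and the object mapping is explicit,
    // which keeps full control over exactly what is executed against the database.
    public static List<string> NamesStartingWith(string connectionString, string prefix)
    {
        var names = new List<string>();
        using (var conn = new SqlConnection(connectionString))
        using (var cmd = new SqlCommand("dbo.GetCustomerNamesByPrefix", conn))
        {
            cmd.CommandType = CommandType.StoredProcedure;
            cmd.Parameters.AddWithValue("@prefix", prefix);
            conn.Open();
            using (var reader = cmd.ExecuteReader())
            {
                while (reader.Read())
                {
                    names.Add(reader.GetString(0));
                }
            }
        }
        return names;
    }
}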

Pros and Cons

There was broad consensus around the pros and cons of ORM – again, based on experience with Entity Framework versions 5 and 6.

Pros

  • Relatively straightforward. It has good default conventions and rules.
  • Functional – it implements three approaches (code-first, model-first, and database-first), inheritance support, and eager and lazy loading (see the sketch after these lists).
  • Flexible. It’s possible to change conventions and rules and to select only the needed relations.

Cons

  • Hard to fine tune EF (e.g. query optimization). In about half of the cases you end up writing SQL manually and executing it from the EF context.
  • Not very good for complex models. SQL queries become very large (up to several pages) and hard to understand.
  • Slow to fetch large datasets (thousands of rows).
  • Not suitable for batch operations (insert, update, delete).
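The eager versus lazy loading point comes down to whether related rows are pulled in the same query or on first access.  A hedged sketch, repeating the hypothetical Customer/ShopContext shapes from above with an added Orders collection:

using System.Collections.Generic;
using System.Data.Entity;
using System.Linq;

public class Order
{
    public int Id { get; set; }
    public int CustomerId { get; set; }
}

public class Customer
{
    public int Id { get; set; }
    public string Name { get; set; }
    public virtual ICollection<Order> Orders { get; set; }   // virtual enables lazy loading
}

public class ShopContext : DbContext
{
    public DbSet<Customer> Customers { get; set; }
    public DbSet<Order> Orders { get; set; }
}

public static class LoadingExamples
{
    public static void Run()
    {
        using (var db = new ShopContext())
        {
            // Eager loading: customers and their orders come back in one round trip.
            var eager = db.Customers.Include(c => c.Orders).ToList();

            // Lazy loading: each access to c.Orders issues an additional query,
            // the classic N+1 pattern behind the "slow on large datasets" complaint.
            foreach (var c in db.Customers.ToList())
            {
                var orderCount = c.Orders.Count;
            }
        }
    }
}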

Net net

There is a range of problems where ORM is a good solution and others where it is not.  Small, relatively non-transactional applications seem to be a good fit.  As the volume of data grows, the value gap narrows relative to well-done, hand-crafted SQL.  The tradeoff is obviously that simple changes take more time to implement and test than they would with something like EF.

ORM seemingly can be made to work for most applications – the question is at what cost.  Hand coding SQL might not make sense for an application with hundreds of database tables.  On the other hand, ORM might not make sense for a highly transactional database.  In the end my sense is that this comes down to people and your architect’s preference.  The choice of an ORM is like choosing a platform – .Net MVC or Ruby on Rails, SQL Server or MySQL, Linux or Windows.  While there are some people out there who can easily move between platforms, in my experience developers have preferences and comfort zones.  The choice of whether to use an ORM tool, and if so which one, is both an application and a personal decision.

References

https://www.devbridge.com/articles/entity-framework-6-vs-nhibernate-4/

http://stackoverflow.com/questions/2891905/should-i-use-entity-framework-instead-of-raw-ado-net

http://weblogs.asp.net/fbouma/fetch-performance-of-various-net-orm-data-access-frameworks-part-2

http://weblogs.asp.net/fbouma/fetch-performance-of-various-net-orm-data-access-frameworks


Scaled Agile Framework (SAFe)

December 27, 2013

Implementing agile methods at higher levels, where multiple programs and business interests often intersect, has always been a challenge.  Consultant Dean Leffingwell, formerly of Rally Software and Rational Software, created a project management framework called the Scaled Agile Framework (SAFe) for applying agile principles at the enterprise level.

Scaled Agile Framework

At a high level SAFe is a set of best practices tailored for organizations to embrace agile principles at the portfolio level.  Conceptually SAFe creates a framework whereby there is an integrated view of, and coordination between, multiple different projects.  NB: The graphic on the SAFe home page is clickable and is a terrific agile reference in and of itself.

One of the best things about agile methodologies is that they are lightweight and self-directed.  High-level systems run the risk of having more overhead than value.  On the other hand, nearly every organization with more than one product needs an integrated view of how projects fit together.  Indeed, it is not unusual to see senior managers, disconnected from day-to-day operations, struggle to see how the pieces fit together or attempt to make invalid comparisons between teams, such as comparing story point velocity.

At the end of 2013, two of the market leaders in application life cycle management (ALM) are Rally Software and Microsoft.  Both Rally and Microsoft’s Team Foundation Server (TFS) have wholeheartedly embraced the notion of portfolio management in the latest iterations of their respective products.

Rob Pinna of the Rally Development team has a great analysis of the SAFe here.  Similarly InCycle Software, a Microsoft Gold Certified ALM partner, recently did a webinar highlighting a customized version of a TFS template they used to demo the capabilities of TFS to support SAFe.


Thoughts on SDET

September 8, 2013

I was recently approached by a colleague about the concept of Software Development Engineer in Test: developers who build software used for testing.  Essentially the argument is that we need to move away from the idea of having two separate QA functions, a manual QA team and an automation team.  The “industry” (Microsoft most prominently) is moving towards 100% automation, and QA engineers are now called “Software Development Engineers in Test” or SDETs.

I reached out to a former colleague in Bangalore about his experience managing a group in Microsoft QA.  (He has since left and presently is a lead at another prominent organization.)  Here is what he told me:

MS has the concept of SDET, i.e. software development engineer in test. What makes this unique is the blend of technical knowledge (language and coding skills) along with testing domain knowledge, which allows this role to contribute extensively to designing in-house automation tools and frameworks, carry out white box testing at the code level, and contribute actively to automation and performance testing.

I then did some reading on my own about SDET and learned a bit from the web.

My very first job was testing printers for Digital.  A lot of what we did was write code that exercised every single mode the printer supported.  For example, we wrote code in Pascal that generated PostScript by hand.  Some of our programs were added to a test automation tool called the “creaker.”  Others had to be run on their own.  This was 90% black box testing, and we did miss some things, but we were software engineers doing testing.  I get what you are saying about wanting testers to look at the code and write unit tests in addition to black box testing.

I come away from all of this thinking that SDET is really hard to pull off without serious organizational commitment.  I could see it working if there were a larger test organization where this concept was institutionalized, or if we had the testers reporting to Development.  On the other hand, testing is not as effective as it needs to be.  There is never enough automation (or enough people doing it) and, more problematic, product knowledge is typically lacking.


Bundling and Minification

September 1, 2013

Found a great post from Rick Anderson about Bundling and Minification in .NET 4.5.  From the blog post:

Bundling is a new feature in ASP.NET 4.5 that makes it easy to combine or bundle multiple files into a single file. You can create CSS, JavaScript and other bundles. Fewer files means fewer HTTP requests and that can improve first page load  performance.

The (basic) implementation is fall down easy.  Create a BundleConfig class like so…

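Something along these lines, modeled on the standard ASP.NET MVC 4 project template (the bundle names and file paths are illustrative):

using System.Web.Optimization;

namespace MyMvcApp
{
    public class BundleConfig
    {
        // Called once at application start-up to register all bundles.
        public static void RegisterBundles(BundleCollection bundles)
        {
            // Combine (and, in release builds, minify) the JavaScript files.
            bundles.Add(new ScriptBundle("~/bundles/jquery").Include(
                "~/Scripts/jquery-{version}.js"));

            // Combine the CSS files into a single stylesheet request.
            bundles.Add(new StyleBundle("~/Content/css").Include(
                "~/Content/site.css"));
        }
    }
}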

Then reference it from Global.asax.cs like this….

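The reference is the usual one-liner in Application_Start, assuming the BundleConfig class sketched above:

using System.Web.Optimization;

public class MvcApplication : System.Web.HttpApplication
{
    protected void Application_Start()
    {
        // Wire up the bundles defined in BundleConfig when the application starts.
        BundleConfig.RegisterBundles(BundleTable.Bundles);
    }
}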


Visual Studio Code Metrics

April 5, 2013

I recently stumbled across a feature in Visual Studio called Code Metrics.  As the title would imply, the feature calculates information about the quality of your code.  As with most things like this, your mileage may vary and developer instinct will kick in when the results don’t make sense.  On the other hand, this is a quick and easy tool to use which can give you a sense of where you may have an issue.  I found the feature very straight-forward to use.  You access it by right-clicking in the Solution Explorer.


After running it, five pieces of data are provided.  See the MSDN Reference for more detail.


  • Maintainability Index (scale 0-100, where higher is better).  Good is 20 to 100, Warning is 10-19, and Issue is 0-9.
  • Cyclomatic complexity (lower is better).  This measures the number of independent paths through a program’s source code.  I remember from my Computer Science days that any given module with a value greater than 10 was considered unmaintainable.  Switch statements have the characteristic of driving up the CC metric but in practice are generally not hard to maintain (see the sketch after this list).
  • Depth of Inheritance (lower is better).  The theory goes the more the inheritance the more difficult it may be to find where a given function is defined.
  • Class Coupling (lower is better).  Obviously the more one class can stand on its own the better and more maintainable it will be.  Ideally you want something to be loosely coupled with high cohesion.
  • Lines of Code (lower is better).  Again from the common sense department the smaller the module the easier it is to understand.
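To illustrate the point above about switch statements, here is a hypothetical method; each case is an independent path, so a metrics tool will report a cyclomatic complexity of roughly the number of cases even though the code is trivial to read:

// Hypothetical example: the five cases plus the default give a cyclomatic
// complexity of roughly 6, yet the method is easy to understand and maintain.
public static string DescribeHttpStatus(int statusCode)
{
    switch (statusCode)
    {
        case 200: return "OK";
        case 301: return "Moved Permanently";
        case 404: return "Not Found";
        case 500: return "Internal Server Error";
        case 503: return "Service Unavailable";
        default: return "Unknown";
    }
}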

Some things to note.

  • The “X” icon exports the data to Excel
  • Functions roll up into collapsible rows.  In my example, a complexity of 133 was a rollup of all the underlying methods.
  • There is a useful filter function to find code that meets a specific minimum or maximum criteria.

This feature comes pre-installed in Visual Studio 2012.  For older versions I believe you may need to use a plug-in.


Hosting an MVC3 (with membership) application on EC2

February 4, 2012

One of my side projects was to get an MVC3 application that uses the Razor View Engine and Membership hosted on EC2 running Linux. I found some amazingly helpful resources along the way – particularly from Nathan Bridgewater at Integrated Web Systems.

Step one of the project is to get an EC2 instance prepped and ready.  Basically I followed the cookbook instructions on Bridgewater’s site – Get Started with Amazon EC2, Run Your .Net MVC3 (RAZOR) Site in the Cloud with Linux Mono.

The exact commands I used:

Create new AMI ID ami-ccf405a5 and associate elastic IP (xx.xx.xx.xx)
sudo apt-get update && sudo apt-get dist-upgrade -y
wget http://badgerports.org/directhex.ppa.asc
sudo apt-key add directhex.ppa.asc
sudo apt-get install python-software-properties
sudo add-apt-repository 'deb http://ppa.launchpad.net/directhex/ppa/ubuntu lucid main'
sudo apt-get update
sudo apt-get install mono-apache-server4 mono-devel libapache2-mod-mono
cd /srv
sudo mkdir www; cd www
sudo mkdir default
sudo chown www-data:www-data default
sudo chmod 755 default
cd /etc/apache2/sites-available/
sudo vi mono-default (see mono-default, change IP address)
cd /etc/apache2/sites-enabled
sudo rm 000-default
sudo ln -s /etc/apache2/sites-available/mono-default 000-mono
sudo mv /var/www/index.html /srv/www/default
sudo vi /srv/www/default/index.html
sudo apt-get install apache2
sudo service apache2 restart
Test in a browser via IP address (you should see the default apache page)

My mono default:

# xx.xx.xx.xx is my Elastic IP address
<VirtualHost *:80>
  ServerName xx.xx.xx.xx
  ServerAdmin myemail@domain.com
  DocumentRoot /srv/www/default
  MonoServerPath xx.xx.xx.xx "/usr/bin/mod-mono-server4"
  MonoDebug xx.xx.xx.xx true
  MonoSetEnv xx.xx.xx.xx MONO_IOMAP=all
  MonoApplications xx.xx.xx.xx "/:/srv/www/default"

  <Location "/">
    Allow from all
    Order allow,deny
    MonoSetServerAlias xx.xx.xx.xx
    SetHandler mono
    SetOutputFilter DEFLATE
    SetEnvIfNoCase Request_URI "\.(?:gif|jpe?g|png)$" no-gzip dont-vary
  </Location>

  AddOutputFilterByType DEFLATE text/html text/plain text/xml text/javascript
</VirtualHost>

Step two is to test mono with a simple Asp.net page.  Put this file into /srv/www/default.  Edit with sudo and view via browser at http://xx.xx.xx.xx/test.aspx.

<%@ Page Language="C#" %>
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd">
<html xmlns="http://www.w3.org/1999/xhtml" xml:lang="en" lang="en">
<head>
<title>ASP.Net Test page</title>
<meta http-equiv="Content-Type" content="text/html; charset=utf-8" />
<script runat="server">
private void Page_Load(Object sender, EventArgs e)
{
lblTest.Text = "This is a successful test.";
}
</script>
</head>
<body>
<h1>
This is a test page</h1>
<asp:Label runat="server" ID="lblTest"></asp:Label>
</body>
</html>
If problems are encountered, check the logs in /var/log/apache2/access.log and /var/log/apache2/error.log.

Step three is to get MySQL installed and tested with this simple application.

sudo apt-get install mysql-server
sudo apt-get install libmysql6.1-cil

Then, in the mysql client:

CREATE DATABASE sample; USE sample;
CREATE TABLE test (id INT AUTO_INCREMENT PRIMARY KEY, name VARCHAR(25));
INSERT INTO sample.test VALUES (null, 'Lucy');
INSERT INTO sample.test VALUES (null, 'Ivan');
INSERT INTO sample.test VALUES (null, 'Nicole');
INSERT INTO sample.test VALUES (null, 'Ursula');
INSERT INTO sample.test VALUES (null, 'Xavier');
CREATE USER 'testuser'@'localhost' IDENTIFIED BY 'somepassword';
GRANT ALL PRIVILEGES ON sample.* TO 'testuser'@'localhost';
FLUSH PRIVILEGES;

Put this file into /srv/www/default.  Edit with sudo and view it in a browser as in step two.

<%@ Page Language="C#" %>
<%@ Import Namespace="System.Data" %>
<%@ Import Namespace="MySql.Data.MySqlClient" %>
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd">
<html xmlns="http://www.w3.org/1999/xhtml" xml:lang="en" lang="en">
<head>
<title>ASP and MySQL Test Page</title>
<meta http-equiv="Content-Type" content="text/html; charset=utf-8" />
<script runat="server">
private void Page_Load(Object sender, EventArgs e)
{
string connectionString = "Server=127.0.0.1;Database=sample;User ID=testuser;Password=somepassword;Pooling=false;";
MySqlConnection dbcon = new MySqlConnection(connectionString);
dbcon.Open();

MySqlDataAdapter adapter = new MySqlDataAdapter("SELECT * FROM test", dbcon);
DataSet ds = new DataSet();
adapter.Fill(ds, "result");

dbcon.Close();
dbcon = null;

SampleControl.DataSource = ds.Tables["result"];
SampleControl.DataBind();
}
</script>
</head>
<body>
<h1>Testing Sample Database</h1>
<asp:DataGrid runat="server" ID="SampleControl" />
</body>
</html>

Step four is to get the simplest possible MVC3 Razor application functioning on Ubuntu / EC2.  Again Bridgewater has a more detailed explanation of what to do at his website linked here.

  1. Go into Visual Studio 2010 and create a new MVC3 / Razor project, making no changes to the default project template.
  2. Build and run it locally.
  3. Ensure that these references are set to “copy local”: System.Web.Mvc, System.Web.Helpers, and System.Web.Routing
  4. Copy System.Web.Razor, System.Web.WebPages, System.Web.WebPages.Razor, System.Web.WebPages.Deployment into your application’s bin directory.  You will find these files in C:\Program Files (x86)\Microsoft ASP.NET\ASP.NET Web Pages\v1.0\Assemblies
  5. Publish the application to a scratch directory
  6. Copy the published application to your EC2 machine.  I used git bash to tar the files (tar -zcvf aws.tar.gz *) as Bridgewater recommends but could not get scp to work, so I ftp’d the file over.
  7. On the EC2 machine: cd /srv/www/default; sudo mv /home/ubuntu/aws.tar.gz .; sudo tar -zxvf *.gz; sudo chown -R www-data:www-data *; sudo chmod 755 *; sudo service apache2 restart
  8. Confirm working from browser by checking default IP address http://xx.xx.xxx
  9. NB: I had to hit refresh several times before the application would work.

Step five is to implement membership using MySQL.

  1. On your Windows machine, edit the default controller and decorate it with the [Authorize] attribute (see the sketch after this list).
  2. Edit your web.config as shown below.  This is where it can get hairy.  If you want to run this locally on Windows you need to install the MySQL Connector for .NET and Mono (http://dev.mysql.com/downloads/connector/net/).  Make sure that you reference system.web.  On Ubuntu the application uses system.data.  The trick is to add them both so you can run the same code on Ubuntu and Windows.  Also notice that I’ve left the database password in clear text.  As Nathan notes, this is not a good practice.
  3. On the Ubuntu machine, go into MySQL and create a database called membership.
  4. Deploy the application to EC2 and test it as in step four.
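For reference, decorating the controller is a one-line change; a minimal sketch using the default MVC template controller name:

using System.Web.Mvc;

[Authorize]   // every action now requires an authenticated user,
              // which exercises the membership provider configured below
public class HomeController : Controller
{
    public ActionResult Index()
    {
        return View();
    }
}

The web.config referenced in step two: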
<?xml version="1.0"?>

<!--
 For more information on how to configure your ASP.NET application, please visit
 http://go.microsoft.com/fwlink/?LinkId=152368
 -->

<configuration>
 <connectionStrings>
 <add name="Default"
 connectionString="data source=127.0.0.1;user id=aspnet_user;
 password=secret_password;database=membership;"
 providerName="MySql.Data.MySqlClient" />
 </connectionStrings>

<system.web>
 <compilation debug="true" targetFramework="4.0">
 <assemblies>
 <add assembly="System.Web.Abstractions, Version=4.0.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" />
 <add assembly="System.Web.Routing, Version=4.0.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" />
 <add assembly="System.Web.Mvc, Version=2.0.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" />
 </assemblies>
 </compilation>

<authentication mode="Forms">
 <forms loginUrl="~/Account/LogOn" path="/" timeout="2880" />
 </authentication>

<!--NOTE that "hashed" isn't supported with the public release of MySql.Web 6.3.5 under
 Mono runtime. But I can't bring myself to share sample code that doesn't hash the
 passwords by default. 😉 The version included with this sample project is slightly modified to
 allow hashed passwords in Mono. I highly recommend checking out the latest version of
 MySql .NET Connector. http://dev.mysql.com

 Also, I found that you have to rebuild MySql.Data and MySql.Web
 using .NET 4.0 profile if you want it to work with Asp.Net 4.0 under Mono. This is a known bug and should
 be published in upcoming versions of the connector. -->
 <membership defaultProvider="MySqlMembershipProvider">
 <providers>
 <clear/>
 <add name="MySqlMembershipProvider"
 type="MySql.Web.Security.MySQLMembershipProvider, mysql.web"
 connectionStringName="Default"
 enablePasswordRetrieval="false"
 enablePasswordReset="true"
 requiresQuestionAndAnswer="false"
 requiresUniqueEmail="true"
 passwordFormat="hashed"
 maxInvalidPasswordAttempts="5"
 minRequiredPasswordLength="6"
 minRequiredNonalphanumericCharacters="0"
 passwordAttemptWindow="10"
 applicationName="/"
 autogenerateschema="true"/>
 </providers>
 </membership>

<roleManager enabled="true" defaultProvider="MySqlRoleProvider">
 <providers>
 <clear/>
 <add connectionStringName="Default"
 applicationName="/"
 name="MySqlRoleProvider"
 type="MySql.Web.Security.MySQLRoleProvider, mysql.web"
 autogenerateschema="true"/>
 </providers>
 </roleManager>

<profile>
 <providers>
 <clear/>
 <add type="MySql.Web.Security.MySqlProfileProvider, mysql.web"
 name="MySqlProfileProvider"
 applicationName="/"
 connectionStringName="Default"
 autogenerateschema="true"/>
 </providers>
 </profile>

<pages>
 <namespaces>
 <add namespace="System.Web.Mvc" />
 <add namespace="System.Web.Mvc.Ajax" />
 <add namespace="System.Web.Mvc.Html" />
 <add namespace="System.Web.Routing" />
 </namespaces>
 </pages>

<!--Don't forget to update this... I left it open to make it easier to debug.-->
 <customErrors mode="Off"/>
 </system.web>

<system.data>
 <DbProviderFactories>
 <clear/>
 <add name="MySQL Data Provider"
 description="ADO.Net driver for MySQL"
 invariant="MySql.Data.MySqlClient"
 type="MySql.Data.MySqlClient.MySqlClientFactory, MySql.Data"/>
 </DbProviderFactories>
 </system.data>

<system.webServer>
 <validation validateIntegratedModeConfiguration="false"/>
 <modules runAllManagedModulesForAllRequests="true"/>
 </system.webServer>

<runtime>
 <assemblyBinding xmlns="urn:schemas-microsoft-com:asm.v1">
 <dependentAssembly>
 <assemblyIdentity name="System.Web.Mvc" publicKeyToken="31bf3856ad364e35" />
 <bindingRedirect oldVersion="1.0.0.0" newVersion="2.0.0.0" />
 </dependentAssembly>
 </assemblyBinding>
 </runtime>
</configuration>