October 8, 2015
Recently I stumbled across an awesome blog post from Peter Seibel @peterseibel, the tech lead of Twitter’s Engineering Effectiveness group, entitled Let a 1,000 flowers bloom. Then rip 999 of them out by the roots. It is a written version of a talk he gave at the Facebook @Scale conference. It is a bit on the wordy side, but there are some really interesting nuggets, a bit of insight into the history of Twitter, and some very witty analogies. Here are a few of the highlights.
- We know how to build abstractions and modularize our code so that we can manage large code bases and how to deploy our software so it can handle the demands of millions or even billions of users. On the other hand, I’d argue that we don’t really yet have a good handle on how to scale that area that exists at the intersection of engineering and human organization—the place where groups like Engineering Effectiveness work.
- I think a big part of the problem is that we—as an industry—are not very good about thinking about how to make engineers effective.
- The Twitter EE motto is: “Quality, Speed, Joy”. Those are the three things we are trying to affect across all of Twitter engineering. Unlike that other famous triple, Fast, Cheap, Good, we believe you don’t have to pick just two.
- We know from Dune that fear is the mind killer. So how does fear manifest in the context of software development? I would say tech debt. Tech debt is the mind killer. Tech debt is the lack of quality. It slows us down. It makes us miserable.
- In order for engineering effectiveness engineers to be able to boost effectiveness across all of engineering, things need to be standardized.
- Your goal should be to pick the set of tools and processes you will support and support the heck out of them. Invest more than you probably think you need to and focus relentlessly on making the tools and processes you do support awesome.
- Finally there’s a psychological aspect to providing good tools to engineers that I have to believe has a real impact on people’s overall effectiveness. On one hand, good tools are just a pleasure to work with. On that basis alone, we should provide good tools for the same reason so many companies provide awesome food to their employees: it just makes coming to work every day that much more of a pleasure. But good tools play another important role: because the tools we use are themselves software, and we all spend all day writing software, having to do so with bad tools has this corrosive psychological effect of suggesting that maybe we don’t actually know how to write good software.
- We don’t even really know what makes people productive; thus we talk about 10x engineers as though that’s a thing when even the studies that led to the notion of a 10x engineer pointed more strongly to the notion of a 10x office. But we’d all agree, I think, that it is possible to affect engineers’ productivity. At the very least it is possible to harm it.
All of this makes a ton of sense and is very complementary to two intersecting industry trends – DevOps and Dev in Test. If you agree that agile is at the heart of DevOps – the merging of development with operations and administration – then engineering effectiveness is an enabler. A fundamental premise of DevOps is to minimize work in progress. Let’s extend that model to tech debt – minimize tech debt, or mental baggage.
Similarly, Dev in Test embeds test engineers directly in the development team. Again, the idea is to allow the organization to deliver value to customers faster. An engineering effectiveness group – or even a single engineer – is another set of hands to streamline the efforts of the main line development team.
My one quibble with Seibel’s assertions is his apparent questioning of the existence of the 10x engineer, as if they were like the Loch Ness Monster. On the contrary, 10x engineers are as real as Murphy’s Law. Managers are well served by optimizing their contributions any way they can – whether that means providing the best available tooling, minimizing unnecessary activity (e.g., meetings), or removing anything else that takes them away from the code.
January 14, 2015
The following is a summary of an email thread discussing Object-Relational Mapping (ORM) tools. In my experience developers hold strong opinions about ORM tools. In a past life my organization used LLBLGen, and the folks who were most informed on ORM tools had strong opinions that it was much better than both nHibernate and Entity Framework. As a conversation starter I provided two articles – one from December of 2013 and a follow-up from February 2014 – comparing the various ORM / data access frameworks. I wanted to see where my organization stood on the topic of ORM.
As expected there were strong opinions. I found that there were essentially two camps – believers and non-believers. Interestingly, the group (of about 10 very well informed senior folks) was evenly split on whether ORM is worth the effort. Also very interesting was that there was little disagreement about the pros and cons of ORM.
The “Believers” are proponents of Microsoft’s Entity Framework. I am apparently the only one to have ever used LLBLGen. Somewhat surprisingly, no one in the group had any significant experience with nHibernate. Some had passing experience with the micro ORMs Dapper and PetaPoco. Believers say that the savings achieved by having clean, very flexible data access layer code are worth the overhead of maintaining the ORM. Their argument is that the investment in tuning the ORM is smaller than the productivity gains achieved from its usage.
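To make the believers’ pitch concrete, here is a minimal sketch of the core ORM idea – rows mapped to objects so application code never touches raw tuples. Python’s sqlite3 stands in for Entity Framework (a .NET library), and the Customer type and schema are hypothetical:

```python
import sqlite3
from dataclasses import dataclass

# Hypothetical stand-in for an ORM entity class.
@dataclass
class Customer:
    id: int
    name: str

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("INSERT INTO customers (name) VALUES ('Acme')")

# The ORM-ish part: every row comes back as a Customer object,
# so calling code works with objects rather than raw tuples.
conn.row_factory = lambda cursor, row: Customer(*row)
customers = conn.execute("SELECT id, name FROM customers").fetchall()
assert customers == [Customer(id=1, name="Acme")]
```

A real ORM adds change tracking, relationships, and query generation on top of this mapping – which is exactly where the overhead the non-believers object to comes from.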
The “Non-Believers” hold that the overhead associated with maintaining an ORM tool does not justify the return on the investment. They believe that stored procedures, connected to the database using custom data access layer code written in ADO.NET, are best. Some have built code templates to help generate and maintain their data access layer. They believe this really helps with efficiency while keeping full control over the code and its execution.
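For contrast, here is a minimal sketch of the non-believers’ preference – a thin, hand-written data access layer with explicit SQL. Again Python’s sqlite3 stands in for ADO.NET, and the CustomerRepository class and its schema are hypothetical:

```python
import sqlite3

class CustomerRepository:
    """Hand-rolled data access layer for a hypothetical customers table."""

    def __init__(self, conn):
        self.conn = conn

    def get_by_id(self, customer_id):
        # Explicit, hand-tuned SQL -- nothing is generated for us.
        row = self.conn.execute(
            "SELECT id, name FROM customers WHERE id = ?",
            (customer_id,),
        ).fetchone()
        return {"id": row[0], "name": row[1]} if row else None

    def add(self, name):
        cur = self.conn.execute(
            "INSERT INTO customers (name) VALUES (?)", (name,)
        )
        self.conn.commit()
        return cur.lastrowid

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT)")
repo = CustomerRepository(conn)
new_id = repo.add("Acme")
assert repo.get_by_id(new_id) == {"id": 1, "name": "Acme"}
```

Every query is visible and tunable, which is the control the non-believers value – at the cost of writing and maintaining each one by hand.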
Pros and Cons
There was broad consensus around the pros and cons of ORM – again based on experience with Entity Framework version 5 and 6.
Pros:
- Relatively straightforward. It has good default conventions and rules.
- Functional – it implements three approaches (code-first, model-first, and database-first), inheritance support, and eager and lazy loading.
- Flexible. It’s possible to change conventions and rules and to select only the relations you need.

Cons:
- Hard to fine-tune EF (e.g., query optimization). In about half of those cases you end up writing the SQL manually and executing it from the EF context.
- Not very good for complex models. SQL queries become very large (up to several pages long) and hard to understand.
- Slow to fetch large datasets (thousands of rows).
- Not suitable for batch operations (insert, update, delete).
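The batch-operations con is easy to illustrate: an ORM change tracker typically issues one UPDATE per loaded entity, while hand-written SQL applies the same change in a single set-based statement. A rough sketch in Python’s sqlite3 (standing in for EF and SQL Server; the orders table is hypothetical):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, total REAL)")

def reset():
    conn.execute("DELETE FROM orders")
    conn.executemany("INSERT INTO orders (id, total) VALUES (?, ?)",
                     [(1, 10.0), (2, 20.0), (3, 30.0)])

def totals():
    return [t for (t,) in conn.execute("SELECT total FROM orders ORDER BY id")]

# ORM-style: load every entity, modify, save -- one UPDATE per row.
reset()
for order_id, total in conn.execute("SELECT id, total FROM orders").fetchall():
    conn.execute("UPDATE orders SET total = ? WHERE id = ?",
                 (total * 1.1, order_id))
orm_result = totals()

# Hand-written SQL: the same 10% increase as a single statement.
reset()
conn.execute("UPDATE orders SET total = total * 1.1")
sql_result = totals()

assert orm_result == sql_result  # same outcome, far fewer statements
```

With thousands of rows, the per-entity version also pays per-statement and round-trip overhead, which is where the gap really shows.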
There is a range of problems where ORM would be a good solution and others where it would not. Small, relatively non-transactional applications seem to be a good fit. As the volume of data grows, the value gap narrows relative to well-crafted, hand-written SQL. The tradeoff is obviously that simple changes take more time to implement and test by hand than with something like EF.
ORM can seemingly be made to work for most applications – the question is at what cost. Hand-coding SQL might not make sense for an application with hundreds of database tables. On the other hand, ORM might not make sense for a highly transactional database. In the end, my sense is that this comes down to people and your architect’s preference. The choice of an ORM is like choosing a platform – .NET MVC or Ruby on Rails, SQL Server or MySQL, Linux or Windows. While there are some people out there who can easily move between platforms, in my experience developers have preferences and comfort zones. The choice of whether to use an ORM tool – and if so, which one – is both an application and a personal decision.