On GitPrime

Eric Lawler

May 27, 2020


Copy-pasting my old interview answers for archive here, since GitPrime’s lovely blog post has disappeared. (Technically, it lives on in a butchered format under their new Pluralsight branding. Congrats on the acquisition, Ben and team.) I fully believed this when writing it three years ago and still believe it today.

Maybe I’ll reach enlightenment in another decade and understand why so many programmers believe it’s fundamentally impossible to measure their performance by examining their raw output (programming code). Until then, I’ll keep experimenting with metrics on how a team is performing using a variety of–gasp–hard numbers and data.

When did you decide that hard data/metrics could be useful in leading your team?

When comparing ourselves to Lawn Love’s other business units, full of KPIs and dashboards for every imaginable metric, engineering felt left out. Our performance indicators were fuzzy, and it was difficult to guide reviews with feelings and anecdotes. We had no way to devise more effective strategies for shipping more features, faster, when we couldn’t measure our current performance.

After reading your compelling arguments for GitPrime versus less nuanced measurements of engineering productivity, we decided our GitHub data is a potential goldmine, ready to be exploited to level up our engineering game.

How did you implement (or introduce the idea of) GitPrime?

We dove straight in! The team was skeptical of using quantifiable data to measure programmers’ productivity, rightfully decrying the measurement of raw lines touched as a poor proxy for performance. We worked through the highlights from your blog posts on “gaming” GitPrime (i.e., doing more, better work!) and how it doesn’t use naive assumptions to generate its reports. We even had the chance to let the team express some of their concerns to Travis and your team in a video chat.

At first, we would review GitPrime once a month. After a few months, it became obvious that utilizing the tool more frequently could create faster feedback loops and help us all keep our eye on the end goal–delivering more features to improve the business.

What reports do you use?

We use the daily update report every day in our morning engineering standups. We review the leaderboard, project timeline, and snapshot reports once a month, with the whole team.

What has been the cultural impact?

What’s the saying, only measure what you want to improve? We’re creating a culture where poor performers can be identified and coached to success, rather than letting performance problems languish in the dark for months… or years. When everyone is aware of how the team as a whole is moving, there are more opportunities for more people to suggest improvements to our processes. Everyone wins when projects are better spec’d and processes streamlined so engineers can spend more time doing what they love most–writing code.

What measurable improvements have you seen?

In our initial rollout of GitPrime, we shared your observations on commit frequency–that more frequent, smaller commits (“small, fast bites” were Travis’s words) strongly correlate with higher overall throughput–with our team. As a result of changing our behavior and actually having a way to measure our progress, commits per day increased by 50% and time to 100 lines of productive code decreased by a similar amount, three months after launch.

18 months later, we’re still aligning our engineering processes with the hard data provided by GitPrime’s reports. There’s always room for more improvement, and we finally have the tools needed to recognize those areas of opportunity.


A never-published follow-up they did the next year with my most-senior engineer surfaced this gem of a quote:

I was somewhat skeptical at first, to be honest.

After getting in the habit of committing more frequently, and more importantly, consistently pumping out work, my [GitPrime metrics have] increased dramatically. I feel this has generally reflected my ability to output code effectively - not just in raw lines, but actually finishing discrete tasks. I used to fall into the trap of equating working longer hours with outputting more work, but now I am leaving work earlier in the day and accomplishing more in a shorter amount of time.

And that, dear reader, is what I call the very definition of success. Getting more productive work done in a day through the proverbial working smarter, not harder? As the kids say, that’s “😍.”

On “Agile” Story Points

Eric Lawler

March 16, 2020


The question

Hi! I was wondering how you guys do Story Pointing? Do you follow the Fibonacci sequence? If not, what do you do?

- Senior Business Systems Analyst

An innocent-enough question, yes? But this question has sparked more bitter arguments than almost any other philosophical debate in business software development.

I dashed off this quick email in response, but wanted to post it to The Greater World (written while the world is grappling with the Wuhan Coronavirus Epidemic, 2020) in the hopes that someone will spot something lacking in my response… and quickly correct me. Please, send any semi-articulate thoughts to my first name @ this domain.


Ah, this is truly a question for the ages.

Silversheet, like my previous engineering team, uses the Fibonacci sequence for story points. But, like all teams, we do it “wrong” in the sense that we’re really just using it as a proxy for hours. “Hmm, I think that will take me half a day to complete–3 points.” or “Gee, sounds tricky. That’s probably a 2-ish day ticket? Probably? 8 points.”

I recently read this treatise on the subject from one of the Netlify pro[ject/duct] managers (a fast-growing tech company) and loved it. Using something wacky to break the link between points, which are supposed to measure complexity, and the usual amount of time required to complete a task sounds ingenious.

More important, in my experience, is agreeing to a hard limit on how big a task can be before it gets decomposed. Basically, as I’m sure you’ve observed, the larger our estimate of complexity or time, the bigger the over/under on that estimate gets. Woody Zuill, the “discoverer” of mob programming principles, has a vicious exercise he does on software estimates. (Any of his essays on how useless estimates are might prove thought-provoking.) In his exercise, he has everyone time themselves on filling out a tic-tac-toe grid with the numbers 1-9, without repeating a number. Then he asks everyone to estimate how long it would take to do the task again. Estimates are all in the range of 8-15 seconds.

So he has everyone do it again–oh, but, hang on, there’s a slight difference this time. The top row needs to add up to 5 and the left column needs to add up to 7. Immediately, this little constraint blows up everyone’s times. He repeats it a few times, then ends by introducing an impossible constraint. He wrote a program to enumerate every possible combination of 1-9 in a 3x3 grid, then asks you to create the grid under a series of constraints that are literally impossible–but you wouldn’t know they’re impossible until you write a similar search algorithm to exhaust all the possibilities… Posing as the business user, he would continually ask “Tell me why this is different! It’s still just writing 5 lines and 9 numbers! How can this task possibly take ten minutes when you just told me it would take 8 seconds?”
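The search program Zuill describes is small enough to sketch. This is not his actual code, and the constraint values below are illustrative, not the ones from his exercise–but it shows how a brute-force pass over all 9! placements can prove a set of constraints satisfiable or impossible:

```python
from itertools import permutations

def satisfiable(constraints):
    """Return the first 3x3 grid of 1-9 (no repeats) that meets every
    constraint, or None if no such grid exists. Each constraint is a
    function taking a grid (a tuple of three rows) and returning a bool."""
    for perm in permutations(range(1, 10)):  # all 362,880 placements
        grid = (perm[0:3], perm[3:6], perm[6:9])
        if all(check(grid) for check in constraints):
            return grid
    return None

# Satisfiable: a top row summing to 15 and a left column summing to 7.
grid = satisfiable([
    lambda g: sum(g[0]) == 15,
    lambda g: g[0][0] + g[1][0] + g[2][0] == 7,
])
print(grid is not None)  # True

# Impossible: 1-9 sum to 45 in total, so three rows can't each sum to 10.
print(satisfiable([lambda g: all(sum(row) == 10 for row in g)]))  # None
```

The punchline is that the two calls look identical from the “business user” side–just 9 numbers in a grid–yet one returns instantly and the other can only be answered by exhausting every possibility.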

To mitigate the classic “Why can’t we ever seem to estimate correctly?” you can clamp story sizes to enforce nothing larger than a 3-point task. (Most people aren’t bold enough to do this, so it’s more common to see an 8-point limit. 8’s too big.) Then, when your 3-pointer runs aground on the craggy rocks of Reality, it slumps into what you’d expect a 5-point ticket to take, but it’s not the end of the world.

But when you’ve secretly packed three 3-point tickets masquerading as an 8-point task, you run the risk of hiding 21 points of complexity (and time) in an innocent, not-well-understood task: how those 3 tasks combine can create a lot more complexity than tackling them in isolation.