How Google Tests Software

I've finished reading How Google Tests Software by James Whittaker.

I was looking at how other industries maintain high quality in the products they deliver, and this book covers that and more: not just the quality aspect but Google's culture as a whole. I've shared it with a few people and it sparked some really good discussions. We can see similarities with what we're already doing, and some practices we could probably adopt. What follows is my interpretation of the book; if you want to learn more, I suggest reading it. It's an interesting read.

Some key points include:

·       Google's culture

  • Computer science and engineering culture; that is, a very large percentage of the company codes.
  • The default meeting length is 25 minutes. With that constraint, everyone is "forced" to make the most of the meeting; the goal is communicated ahead of time and everyone comes prepared.
  • 20% of their time is free time. Some call these "hack days"; we call them "innovate days", but the concept is the same. Engineers are free to work on anything they want with 20% of their time, which facilitates innovation and creativity. It has turned out to be productive too: at Google, Chrome and Gmail started as 20% projects and are now popular Google products. The 20% is mandatory whether it produces anything or not.
  • Project mobility: engineers move around to different projects. The result is a pool of general-purpose engineers who can code on web, desktop, data center, test, etc. Wouldn't it be great to have general-purpose developers who can code on BIOS, emulators, UI, etc. too? I think this is the goal of the Technical Rotational Program as well, don't you think?

·       Quality in Google

  • Quality is owned by engineers. Test is part of engineering.
  • Test is unavoidable; it's part of the system workflow. The infrastructure does not allow you to commit code without unit tests and code reviews.
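The book doesn't include code samples, but a minimal Python sketch can illustrate what "feature code plus its unit tests" means in practice. The function and test names here are hypothetical, not from the book:

```python
# Hypothetical illustration: a small feature and the unit tests that
# would accompany it in the same change before a commit is accepted.
import unittest

def normalize_email(address: str) -> str:
    """Feature code: canonicalize an email address for comparison."""
    local, _, domain = address.strip().partition("@")
    return f"{local.lower()}@{domain.lower()}"

class NormalizeEmailTest(unittest.TestCase):
    def test_mixed_case_is_lowered(self):
        self.assertEqual(normalize_email("Bob@Example.COM"), "bob@example.com")

    def test_surrounding_whitespace_is_stripped(self):
        self.assertEqual(normalize_email("  bob@example.com "), "bob@example.com")

# Run with: python -m unittest <this_file>
```

The point is not the feature itself but the pairing: the test ships in the same change list as the code it covers.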

·       Engineering Productivity

  • The traditional QA team is called Engineering Productivity. The idea is NOT to do testing BUT to make development faster. This stems from the fact that a major cause of slow development is bug debt and quality debt.
  • Testers are assigned to product areas but report through Engineering Productivity. They are on loan to product teams and are free to raise quality concerns and ask questions.
  • Having testers separate from the product team balances resources against actual, rather than perceived, need.
  • The on-loan status of testers and their movement from project to project ensures that good ideas move rapidly around the company.

·       Test Roles

  • Software Engineers (SWEs) are feature developers.
    • 100% coders. They create design docs and choose data structures and the overall architecture.
    • Code on behalf of the product.
    • Practice TDD and write unit tests.
    • Own the quality of everything they touch, whether they wrote it, fixed it, or modified it. If an SWE modifies a function and that modification breaks an existing test or requires a new one, they must author that test.
  • Software Engineers in Test (SETs) are infrastructure developers.
    • 100% coders. Also developers, except their focus is testability and general test infrastructure.
    • Code on behalf of the developers.
    • Review designs and look closely at code quality and risk.
    • Refactor code to make it more testable, and write unit testing frameworks and automation.
    • Partner with SWEs, but are more concerned with increasing quality and test coverage than with adding new features or improving performance.
  • Test Engineers (TEs)
    • Not manual testers. They do a lot of coding too, but write scenarios and scripts that submit input and process output.
    • Code on behalf of the user.
    • Do risk analysis, end-to-end test scripts, and quality management.
  •  (screenshot from Google)
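To make the TE role concrete, here's a hypothetical Python sketch of a scenario script that submits inputs and processes outputs. The stdlib JSON round-trip stands in for a real system under test; the scenarios and names are my own illustration:

```python
# Hypothetical TE-style scenario script: feed inputs to the system under
# test, capture outputs, and report any mismatches. json.dumps stands in
# for a real service endpoint.
import json

SCENARIOS = [
    ({"user": "alice", "items": 3}, '{"user": "alice", "items": 3}'),
    ({"user": "bob", "items": 0},   '{"user": "bob", "items": 0}'),
]

def run_scenarios():
    failures = []
    for payload, expected in SCENARIOS:
        actual = json.dumps(payload)   # submit input
        if actual != expected:         # process output
            failures.append((payload, expected, actual))
    return failures                    # empty list means all scenarios passed

if __name__ == "__main__":
    failures = run_scenarios()
    print("PASS" if not failures else f"FAIL: {failures}")
```

Unlike a unit test, this drives the system from the outside, the way a user would.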

·       Code Review culture

  • All code is shared. To maintain good quality in common code, Google adopts the concept of "committers" from the open source community. Everyone is a committer, but a concept called readability distinguishes proven committers from new developers. Readabilities are credentials that designate an experienced, trustworthy developer, and they are language-specific; at Google these are C++, Java, Python, and JavaScript.
  • An SWE creates a change list (feature code + unit tests) and submits it to Mondrian. Mondrian runs automated checks against pre-submit rules, such as conforming to the code style guide and passing all unit tests. When the change list passes, it is forwarded to code reviewers who hold the relevant readabilities, and messages go back and forth between reviewer and committer.
  •   (screenshot from Google)
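The book doesn't spell out Mondrian's pre-submit rules in code, but the gate it describes can be sketched in Python. Everything here (function name, file-naming convention, failure messages) is a hypothetical illustration:

```python
# Hypothetical sketch of a pre-submit gate: reject a change list unless
# it includes unit test files and every automated check passes.

def presubmit(change_list, style_ok, tests_pass):
    """Return (accepted, reasons) for a change list.

    change_list: file names in the change.
    style_ok / tests_pass: results from the style checker and test runner.
    """
    reasons = []
    if not any(name.endswith("_test.py") for name in change_list):
        reasons.append("change list has no unit test files")
    if not style_ok:
        reasons.append("style guide violations")
    if not tests_pass:
        reasons.append("unit tests failed")
    return (not reasons, reasons)

# A change with feature code + tests, clean style, and green tests passes;
# only then would it move on to human reviewers.
ok, why = presubmit(["billing.py", "billing_test.py"], True, True)
```

The key idea is that the machine-checkable rules run before any human reviewer spends time on the change.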

·       Test Certified (an interesting read in the book, which explains how this was done)

  • Product teams are graded and certified. It's an internal CMMI of sorts: people go around and audit teams to measure a development team's capability to deliver high-quality software.
  • It's a good way to develop a quality culture within the company, and it differentiates newbie teams from respected development teams.
  • Wouldn't it be cool to adopt this in our Scrums too, maybe in the near future?
  •   (screenshot from Google)

 
