On the heels of all the features and goodies Unity 4.0 brings you, I’d like to tell the story of how quality assurance, or “testing” if you prefer, has progressed and evolved since the release of Unity 3.5.

Back in February when we shipped Unity 3.5, the QA department consisted of 11 souls, although that includes me, and as a manager I excel at doing nothing. We had been through a battle to get 3.5 out, mostly working through the bug reports our beloved community had sent us. In other words, we were primarily reactive, and while we take pride in our community, we are not proud to let them do the majority of our testing; with the user base increasing steadily, being reactive was not going to scale for long.

Manual Testing

In March we used a Ninja camp to try out different test case management systems, which was a week well spent. After trialing several systems, we set our hearts on QMetry, a lightweight system which fulfilled our needs for reporting and tracking execution.

With the TCM in place, we spent a month writing manual test cases to fill up our system with the cases we needed in all areas of Unity. Or at least some of them, because we were still too short-staffed to have coverage on everything. Soon after followed Full Test Pass 1 (executing all manual test cases, spread across all platforms) on Unity 4.0, where we spread 480 test cases over many of our platforms, making the total number of test points (combinations of a test case and a platform) in this pass somewhere below 3000. We had some areas covered too much and others not covered at all, but that is the nature of work in progress, and we went into it with open eyes.
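
To make “test points” concrete: a test point is just the pairing of one test case with one platform, so the totals fall out of a simple cross product. A minimal sketch (the test-case IDs and the platform list here are made up for illustration; the real suites are Unity-internal):

```python
from itertools import product

# Hypothetical names for illustration; the real suites and
# platform list are Unity-internal.
test_cases = [f"TC-{i:03d}" for i in range(1, 481)]   # 480 manual test cases
platforms = ["Windows", "OSX", "iOS", "Android", "Web", "Flash"]

# A test point is one (test case, platform) combination.
test_points = list(product(test_cases, platforms))

print(len(test_points))  # 480 cases x 6 platforms = 2880 test points
```

With six hypothetical platforms the full product is 2880, consistent with the “somewhere below 3000” figure; in practice some areas were over- or under-covered, so the real matrix was not a perfect grid.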

During the summer we got better acquainted with the new features introduced in 4.0, and more work went into our manual test cases. Some test cases were automated, some were discovered to have been automated all along, and naturally more were added along the way. At this point we also had a good staff of student workers to handle incoming incidents from the community, so the STEs (Software Test Engineers, see my previous post on Testing Unity for further explanation) could focus better on structured testing. As a result we had a little over 600 test cases to execute in Full Test Pass 2 in August, which unfortunately landed one week before Unite for various reasons. So we actually only finished half of our designated 2800 test points. We still uncovered about 100 bugs in 5 days, so it was beneficial nonetheless.

Heading towards the release, we had scheduled Full Test Pass 3 to immediately follow RC1. Our initial smoke tests on RC1 revealed that we would not be able to get good data from that build, so we postponed FTP3 by one week. In the run-up to FTP3, our STEs did a great job of preparing all test suites, assigning them and spreading the load across all of QA, so we were ready to go first thing Monday morning of the FTP3 week: 800 unique test cases, spread over 2800 test points, and with a magnificent effort we executed all but 18 of them in 5 days. We uncovered 90 bugs, many of which have now been fixed.

A Full Test Pass is one of the structured approaches we use to test Unity, but it is just one of the many tools in our toolbox. The bugs we find in these test passes are very often bugs we would never find using other techniques or just by monitoring incoming reports. For us it is an essential part of unearthing problems in the code, and it is great for regression testing.

Throughout the 4.0 cycle we also had several full weeks with a pure focus on exploratory testing by everyone in QA. If you think exploratory testing is a matter of sitting down and behaving randomly, you would be massively mistaken. It is lightweight in preparation, but there HAS to be preparation. You need to select the area to focus on and the type of testing you want to execute on that area, and then focus for 1-2 hours on finding as many bugs as possible. It finds many bugs you would never encounter in a structured test, so it complements structured testing perfectly, and in cases where you have little documentation or the feature is unstable, it is a very good technique to use.

Automation

During the year we have made leaps in automation. We had a pretty good starting point from what the developers had done, but the frameworks needed more love, and we wanted to move some test cases from an old framework to a new, faster and more lightweight one. The new runtime framework also has the benefit of executing on all of our platforms, so one written test case will exercise the code on all of the targets. This is tremendously important when you realize that our platforms are effectively different codebases through a series of defines.

Work on the frameworks started before we shipped 3.5, but with only 2 guys on our toolsmiths team, it got off to a slow start. In May we started ramping up a new office in Odessa with the primary purpose of growing our test team in size and talent. The Ukrainian market for testers is quite good, and I have had good previous experiences recruiting both STEs and SDETs in Ukraine. During the summer and fall we managed to hire 6 guys for the Odessa team, one of whom is working with the toolsmiths. This has had a tremendous impact on our ability to produce automation and improve the frameworks.

To really ramp up our automation, we had an automation push just after Unite. Almost everyone in the QA team got a good chance to contribute automated test cases with the help of our toolsmiths, SDETs started doing code reviews on each other's work, and we saw a steady increase in the number of test cases. What is also very interesting about automation is that this type of testing uncovers a different kind of bug during development than the two types of manual testing mentioned above. Essentially you need to be extremely specific about your input and the expected results, so it is all about details. This is an added benefit of doing automation that few are aware of, but we saw it during this push. Obviously the regression testing running on our builds is the biggest benefit of automation, but it is worth knowing that there is a different kind of benefit as well.

The automation effort is ongoing, and we will ramp it up even further in the coming releases, about which I will post more data.

Incoming Incidents

The shift in focus from reactive to proactive may have meant that we slipped on handling incoming incidents reported by our users. Truth be told, we have not handled the majority of our 3.5 incidents since 4.0 went into beta. One of the major problems with incidents reported against publicly released versions is that the quality of the bug reports is quite low: almost 9 out of 10 reports we process end up being something other than an actionable bug. Contrast this with 60-80% of our closed-beta reports being actionable and you can see why we changed focus. The other side of the equation is equally in favor of the latest version: there is a much higher chance of getting a 4.0 bug fixed! Testing is all about prioritizing effort among an endless number of possibilities, and thus most of it shifts to the current release.
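
The arithmetic behind that prioritization is straightforward. A back-of-the-envelope sketch using the approximate rates quoted above (the batch size is hypothetical):

```python
# Roughly 1 in 10 public reports is actionable, vs. 60-80% of beta reports.
public_actionable_pct = 10   # % of public reports that are actionable bugs
beta_actionable_pct = 60     # lower bound of the 60-80% beta range

reports = 100                # hypothetical batch of processed reports

public_bugs = reports * public_actionable_pct // 100
beta_bugs = reports * beta_actionable_pct // 100

print(public_bugs)                # 10 actionable bugs from 100 public reports
print(beta_bugs)                  # 60 actionable bugs from 100 beta reports
print(beta_bugs // public_bugs)   # at least a 6x better yield per report
```

The same hour of triage effort simply pays off several times over when spent on beta reports, before even counting the higher chance of the fix landing.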

With those words, I'm very proud to say that we have processed 97% of all incidents reported by public and beta testers during the 4.0 release. In other words, if you report a bug during a beta, it WILL be looked at! I can assure you that this sometimes surprises the people who report the bugs, since we often ask them for more information and they respond with "Omg, I didn't think anyone would ever care".

Well, we do! We care a whole damn lot about it.

Unity 4.0

So we have now released 4.0. We are happy and proud. But we are also QA. We have found many bugs which were not deemed important enough for this release, which is a reality of any kind of software, but it is also a little sting in the heart of any QA. Nevertheless, we believe that this release has a higher quality than any previous version released and on top of this it has some magnificent new features.

During this release more than 1800 bugs have been fixed. 481 of these were bugs found in previous versions, so there has been plenty of love for existing features as well. 673 of the total 1860 bugs were found and reported by our users.
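
For a sense of proportion, those figures break down as simple percentages (just arithmetic on the numbers quoted above):

```python
total_fixed = 1860            # total bugs fixed in the 4.0 release
from_previous_versions = 481  # fixes to bugs found in previous versions
reported_by_users = 673       # fixes to bugs found and reported by users

print(round(100 * from_previous_versions / total_fixed))  # 26 (% on existing features)
print(round(100 * reported_by_users / total_fixed))       # 36 (% community-reported)
```

So roughly a quarter of the fixes went to existing features, and over a third of everything fixed came straight from user reports.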

QA is now moving its focus to the next version we are going to release, ramping up on the new features we need to test and adding much more automation to keep regressions from slipping into your hands. If you want to join our ranks in this effort, check our open positions and see if you match one of them.

November 14, 2012 in Technology | 8 min. read