Testing Unity part 2

May 24, 2012 in Technology | 6 min. read

Time for the second installment of what testing Unity is like (first one here). As mentioned in the previous post, we have STEs (Software Test Engineers) working closely with development teams on building high-quality features, and SDETs (Software Development Engineers in Test) working with STEs and development teams to promote testability and build tools & frameworks for testing Unity. In this post I want to go into more detail about the specific work we do and the tools we use.

Manual testing

The primary tool for any structured manual testing effort is the test case management system. Often the tool of choice is a spreadsheet, but once you get to more than a few people, you need a real tool for the job. After trying out a lot of different products, we have chosen a system called QMetry. As I mentioned in the previous blog post, manual testing has a lot to do with figuring out how to get coverage of an area and reporting status to the development team. It's all about visibility and feedback, and using a tool like QMetry helps us track both. From one place we can organize the testing needed, prepare a new cycle of testing, push defects into our bug tracker and make reports for all to see how each part of Unity is being tested and what types of bugs come in.

Besides the very structured testing, we also spend a good amount of time doing exploratory testing to find bugs in new features. It's an art form in its own right to attack an application and find bugs. Not everyone possesses the ability to do this effectively, but it is of incredible value. The most extreme form of exploratory testing is a bug bash, where everyone in the development department pairs up and tries to find as many bugs as possible. The team finding the best bug and the team finding the most bugs are rewarded with a very edible and/or drinkable award.

It’s also worth mentioning that we have full integration between our bug tracker (FogBugz) and our source control system (Mercurial), meaning that we have traceability from bug report to fix in source code. This enables us to always know when and where fixes can be verified in different versions of Unity. We also build continuously and test on daily builds.

Automation

Much of the testing we do today is automated. For regression testing we have different frameworks which help us write different types of test cases. For small, isolated tests of the Unity Runtime we have the Runtime Test Framework. With this framework we can produce very small, very fast test cases which target a very specific part of Unity. It is capable of running the test cases on all of our targets, so one test case can verify behavior on all supported platforms. This is of extreme value, since it gives very good coverage with little effort, and it is only possible because all platforms are required to implement an interface which lets us communicate with them in the same way. Further, the framework works like a sandbox, so it is very easy to use and hard to do stupid things in.
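
The framework itself is internal to Unity, but the core idea of a shared per-platform interface driving one test everywhere can be sketched like this (all class and test names here are hypothetical, not the actual framework API):

```python
from abc import ABC, abstractmethod

class PlatformTarget(ABC):
    """Common interface every platform backend is required to implement."""
    name: str

    @abstractmethod
    def run(self, test_name: str) -> str:
        """Execute the named test on this target and return its result."""

class DesktopTarget(PlatformTarget):
    name = "desktop"
    def run(self, test_name: str) -> str:
        return "pass"  # stand-in for real on-device execution

class MobileTarget(PlatformTarget):
    name = "mobile"
    def run(self, test_name: str) -> str:
        return "pass"  # stand-in for real on-device execution

def run_on_all_targets(test_name, targets):
    """One test case, executed identically on every supported platform."""
    return {t.name: t.run(test_name) for t in targets}

results = run_on_all_targets("transform_rotation", [DesktopTarget(), MobileTarget()])
```

Because every target speaks the same interface, the runner never needs platform-specific branches; that is what makes "write once, verify everywhere" cheap.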

For larger tests we have the Integration Test Framework. Tests written with it have a larger scope, usually take longer to execute and have broader coverage. One example could be: build an asset bundle, set up a web server and deploy the asset bundle there, start Unity, build a player, download the asset bundle and so on. The Integration Test Framework does not support multi-target execution out of the box, so some extra work is needed to run a test case on different platforms. Also, the framework is not sandboxed like the Runtime Test Framework, so it is somewhat easier to do things such as making processes hang, forgetting to release shared resources and so on. This is the price of greater freedom and flexibility.
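
To make the shape of such a test concrete, here is a minimal sketch of just the serve-and-download portion of the example above, using Python's standard library. The Unity build steps are stubbed out with a placeholder file, since those would run through Unity itself:

```python
import http.server
import os
import tempfile
import threading
import urllib.request
from pathlib import Path

# Stand-in "asset bundle" to serve. In the real test this file would be
# produced by a Unity build step rather than written by hand.
workdir = Path(tempfile.mkdtemp())
(workdir / "bundle.unity3d").write_bytes(b"fake-bundle-contents")

# SimpleHTTPRequestHandler serves the current working directory.
os.chdir(workdir)
server = http.server.HTTPServer(
    ("127.0.0.1", 0), http.server.SimpleHTTPRequestHandler
)
port = server.server_address[1]
threading.Thread(target=server.serve_forever, daemon=True).start()

# The player under test would download the bundle from this URL; here we
# fetch it directly and check that the payload round-trips intact.
data = urllib.request.urlopen(f"http://127.0.0.1:{port}/bundle.unity3d").read()
server.shutdown()
```

Note how much shared state a test like this touches (a temp directory, a port, a background thread): exactly the kind of resources that are easy to forget to release when there is no sandbox cleaning up after you.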

On the highest level we have our regression rig. Very simply put, this thing can run a set of pre-recorded webplayer games on any given changeset and then compare the output (screenshots, audio, logs) to any other changeset, effectively giving us a near-realtime picture of how much we broke existing games with the latest codebase. On top of this, the rig has a bisect feature that can pinpoint exactly which changeset caused a regression by tracking down the change automatically. Obviously there is a whole lot more detail to how this rig works, but maybe that will come in a later post.
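
The bisect feature is, at its heart, a binary search over the changeset history. A minimal sketch of the idea (the real rig's predicate would build the changeset and compare its output against a known-good recording; here `is_broken` is a hypothetical stand-in):

```python
def bisect_regression(changesets, is_broken):
    """Binary-search a chronologically ordered changeset list for the
    first changeset where is_broken(changeset) becomes True.

    Assumes the first changeset is good, the last is broken, and there
    is a single good-to-broken transition in between.
    """
    lo, hi = 0, len(changesets) - 1  # lo: known good, hi: known broken
    while hi - lo > 1:
        mid = (lo + hi) // 2
        if is_broken(changesets[mid]):
            hi = mid  # regression is at mid or earlier
        else:
            lo = mid  # regression is after mid
    return changesets[hi]

# Toy stand-in: ten changesets, regression introduced at changeset 6.
culprit = bisect_regression(list(range(10)), lambda cs: cs >= 6)
```

Each probe halves the search space, so even a history of thousands of changesets needs only a dozen or so build-and-compare runs to find the culprit.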

The majority of test cases we have today have been written by the developers of each feature, which is a very good sign. As we staff up in QA, more test code will be written directly by developers in test; they will also make sure the entire suites work well together and that tests are written in the most suitable framework.

Tools

Tools are extremely important for an efficient development department, and we have a small team of three developers in QA working primarily on tools and frameworks.

We have what we call the Callstack Analyzer. This tool extracts the callstacks from the crash reports that the community files. Every time Unity crashes, you get a dialog with a request to fill in a bit of information about what you were doing and then send us the project and the callstack. This callstack is then analyzed and broken into blocks which we have previously identified as belonging to Unity, Mono, the Windows OS, Mac OS etc. Then we match each block against all the other callstacks we have, identify those where specific parts of the stack are duplicated, and start investigating them. This is where the additional information you give us comes into play, since the callstack itself rarely points us in a direction; a callstack with 20 duplicates and 1 reproducible bug report is a great catch. So please press that button on a crash, even if you don’t want to write anything; even the callstack itself can be of value to us.
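
The classify-then-deduplicate step can be sketched in a few lines. The frame names, ownership prefixes and report data below are all made up for illustration; the real analyzer's rules are internal:

```python
from collections import defaultdict

# Hypothetical module prefixes used to classify stack frames into blocks.
OWNERS = {"UnityEngine": "Unity", "mono_": "Mono", "ntdll": "Windows OS"}

def classify(frame):
    """Attribute a single stack frame to a known code owner."""
    for prefix, owner in OWNERS.items():
        if frame.startswith(prefix):
            return owner
    return "Unknown"

def unity_block(callstack):
    """Extract only the Unity-owned frames as a hashable signature."""
    return tuple(f for f in callstack if classify(f) == "Unity")

def group_duplicates(reports):
    """Bucket crash reports whose Unity portion of the stack matches."""
    groups = defaultdict(list)
    for report_id, callstack in reports:
        groups[unity_block(callstack)].append(report_id)
    return groups

reports = [
    (1, ["ntdll!wait", "UnityEngine.Render", "UnityEngine.Camera"]),
    (2, ["mono_gc", "UnityEngine.Render", "UnityEngine.Camera"]),
    (3, ["ntdll!wait", "UnityEngine.Physics"]),
]
groups = group_duplicates(reports)
```

Reports 1 and 2 land in the same bucket even though their non-Unity frames differ, which is the point: matching on the Unity-owned block surfaces duplicates that a whole-stack comparison would miss.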

We have some additional tools, e.g. for processing bugs, which reside on our servers. Tools for tracking our test projects, performing code coverage analysis, making reports on cyclomatic complexity etc. are part of the toolset we can bring in regularly to get a complete picture of how Unity is doing in the current development cycle.

More to come...

I promised you the bug reporter in post 2, but I’ll leave it for the next installment of Testing Unity. In the meantime, go check out our job openings at http://www.unity3d.com/company/jobs.
