Verifying the scripting docs - Fun with EditorTests

August 18, 2017 in Technology | 14 min. read

When I encounter an API in Unity that I am unfamiliar with, the first thing I (and most of us) do is go to the Unity Scripting API Manual to see how it works via one of the examples. If that example will not compile when I try it, I assume that I must be doing something wrong. The example couldn't possibly be broken, could it...?

This is how I discovered that we do indeed have examples in our scripting docs that do not compile, a result of API changes over time and the odd case of an example that never compiled to start with. At Unity we have a lot of freedom in how we work; if we see a problem we can report it to the relevant team or fix it ourselves. At one of our recent Sustained Engineering team weeks we decided to do our own hackweek and picked several issues we wanted to tackle. Some of us chose to look into a solution for the broken examples in the scripting docs.

There are about 15,000 scripting docs pages. Not all of them contain examples (a different problem which we are working to improve); however, a large portion do. Going through every example and testing it manually would be unachievable in a week, and it would not solve the problem of API changes or broken examples being written in the future either.

Last year as part of the Unity 5.3 release we included a new feature called the Editor Test Runner. This is a unit test framework that can be run from within Unity. We have been using the Editor Test Runner internally for our own automated tests since its introduction. I decided to tackle the problem using an editor test. All our scripting docs are stored in XML files which we edit through an internal Unity project.

The code to parse all these files is already available in this project so it made sense to add the editor test into the same project so we could reuse it.
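
For anyone who has not used it, an editor test is just an NUnit test class compiled into an editor assembly (for example, under an Editor folder in the project); it then shows up in the Editor Tests Runner window. A minimal one looks something like this (the class and method names here are purely illustrative):

using NUnit.Framework;

// A minimal editor test. Place it under an Editor folder and it appears in the
// Editor Tests Runner window (the class and method names are only illustrative).
public class MinimalEditorTest
{
    [Test]
    public void TwoPlusTwoIsFour()
    {
        Assert.AreEqual(4, 2 + 2);
    }
}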

In our editor test framework (which uses NUnit) there is an attribute called TestCaseSource that can be applied to a test. It lets a test be run multiple times with different source data. In this case the source data is our list of scripting doc files.

public class ScriptVerification
{
    const string k_FileExtension = ".mem.xml";

    public static IEnumerable TestFiles
    {
        get
        {
            // Get all the xml files
            var files = Directory.GetFiles("OurDocsApiPath", "*" + k_FileExtension, SearchOption.AllDirectories);
            // Each file is a separate test.
            foreach (var file in files)
            {
                string testName = Path.GetFileName(file).Replace(k_FileExtension, "");
                yield return new TestCaseData(file).SetName(testName);
            }
        }
    }

    [Test]
    [TestCaseSource("TestFiles")]
    public void TestDocumentationExampleScripts(string docXmlFile)
    {
        // Do the test
    }
}

With this in place, the test runner now shows a list of all the tests that will be run. Each test can be run individually, or they can all be run using the Run All option.

To compile the examples we use CodeDomProvider. It allows us to pass in one or more strings that represent a script, and it will compile and return information on errors and warnings.
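
In isolation, the pattern looks something like this. This is only a minimal sketch: the CompileSketch class and the single UnityEngine reference are illustrative, and the real test (below) sets up more than this.

using System.CodeDom.Compiler;
using UnityEngine;

// Minimal sketch of the CodeDomProvider pattern. The method name and the single
// UnityEngine reference are illustrative; the real test adds more references.
public static class CompileSketch
{
    public static void CompileAndReport(string sourceCode)
    {
        CodeDomProvider provider = CodeDomProvider.CreateProvider("CSharp");
        var parameters = new CompilerParameters
        {
            GenerateExecutable = false,
            GenerateInMemory = true,
        };
        // Reference UnityEngine so examples that use MonoBehaviour etc. can resolve.
        parameters.ReferencedAssemblies.Add(typeof(MonoBehaviour).Assembly.Location);

        CompilerResults results = provider.CompileAssemblyFromSource(parameters, sourceCode);
        foreach (CompilerError error in results.Errors)
            Debug.Log(error.ToString());
    }
}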

This is a cut-down version (XML parsing removed) of the first iteration of the test:

using UnityEngine;
using NUnit.Framework;
using System.CodeDom.Compiler;
using System.Collections;
using System.Reflection;
using System.Xml;
using System.IO;
using UnityEditor;

public class ScriptVerification
{
    const string k_FileExtension = ".mem.xml";

    public static IEnumerable TestFiles
    {
        get
        {
            // Get all the xml files
            var files = Directory.GetFiles("OurDocsApiPath", "*" + k_FileExtension, SearchOption.AllDirectories);

            // Each file is a separate test
            foreach (var file in files)
            {
                string testName = Path.GetFileName(file).Replace(k_FileExtension, "");
                yield return new TestCaseData(file).SetName(testName);
            }
        }
    }

    CodeDomProvider m_DomProvider;
    CompilerParameters m_CompilerParams;

    [SetUp]
    public void InitScriptCompiler()
    {
        m_DomProvider = CodeDomProvider.CreateProvider("CSharp");
        m_CompilerParams = new CompilerParameters
        {
            GenerateExecutable = false,
            GenerateInMemory = false,
            TreatWarningsAsErrors = false,
        };
        Assembly unityEngineAssembly = Assembly.GetAssembly(typeof(MonoBehaviour));
        Assembly unityEditorAssembly = Assembly.GetAssembly(typeof(Editor));
        m_CompilerParams.ReferencedAssemblies.Add(unityEngineAssembly.Location);
        m_CompilerParams.ReferencedAssemblies.Add(unityEditorAssembly.Location);
    }

    [Test]
    [TestCaseSource("TestFiles")]
    public void TestDocumentationExampleScripts(string docXmlFile)
    {
        // Parse the xml and extract the scripts
        // foreach script example in our doc call TestCsharpScript
    }

    void TestCsharpScript(string scriptText)
    {
        // Check for errors
        CompilerResults compilerResults = m_DomProvider.CompileAssemblyFromSource(m_CompilerParams, scriptText);
 
        string errors = "";
        if (compilerResults.Errors.HasErrors)
        {
            foreach (CompilerError compilerError in compilerResults.Errors)
            {
                errors += compilerError.ToString() + "\n";
            }
        }
        Assert.IsFalse(compilerResults.Errors.HasErrors, errors);
    }
}

And it worked! We needed to make some small changes in how we compile the examples, though, as some scripts are designed to go together as a larger example. To check for this we compiled them separately; if we found an error, we then compiled them again combined to see if that worked.
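
Stripped down, that fallback looks roughly like this; it is a simplified sketch of the logic, and the real version lives in TestScriptsCompile in the full test at the end of this post.

using System.CodeDom.Compiler;
using System.Collections.Generic;

// Simplified sketch of the fallback: compile each example on its own and, if any
// fail, try compiling all of them together as one larger example.
public static class CombinedCompileSketch
{
    public static bool ExamplesCompile(CodeDomProvider provider, CompilerParameters parameters, List<string> scripts)
    {
        bool error = false;
        foreach (var script in scripts)
        {
            if (provider.CompileAssemblyFromSource(parameters, script).Errors.HasErrors)
                error = true;
        }

        if (error)
        {
            // The scripts may be parts of one combined example, so try them as a single compilation.
            var combined = provider.CompileAssemblyFromSource(parameters, scripts.ToArray());
            if (!combined.Errors.HasErrors)
                error = false;
        }
        return !error;
    }
}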

Some examples are written as single lines of code which are not wrapped in a class or function. We could fix this by wrapping them in our test, but we have a rule that all examples should compile standalone (i.e. if a user copies and pastes one into a new file it should compile and work), so we count those examples as test failures.
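
For example, a fragment like the first snippet below fails the test, while the second, standalone form passes. Both snippets are made up for illustration and are not taken from the docs.

// Made-up illustration: a bare fragment like this cannot compile on its own
// because it is not wrapped in a class or method, so it counts as a failure.
transform.position = Vector3.zero;

// The same idea written as a standalone example, which compiles by itself.
using UnityEngine;

public class MoveToOrigin : MonoBehaviour
{
    void Start()
    {
        transform.position = Vector3.zero;
    }
}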

The test was now in a state where it could be run as part of our build verification on the path to trunk. However, there was one small problem: the test took 30 minutes to run. This is far too long for a test running in build verification, considering we run around 7000 builds a day.

The test was running sequentially, one script after another, but there was no reason we could not run them in parallel: the tests were independent of each other, did not need to make any calls to the Unity API, and we were only testing that the examples compile, not their behaviour. Introducing ThreadPool, a .NET API that can be used to execute tasks in parallel. We push the tests into the ThreadPool as individual work items, and they are executed as soon as a thread becomes available. This needs to be driven from a single function, meaning that we can no longer have individual NUnit test cases for specific examples from the docs. As a result we lose the ability to run any one of the tests individually, but we gain the ability to run them all quickly.

[Test]
public void ScriptVerificationCSharp()
{
    // Setup. Start all tests running on multiple threads.
    s_ThreadEvents = new ManualResetEvent[s_DocInfo.Count];

    for (int i = 0; i < s_DocInfo.Count; ++i)
    {
        // Queue this example up for testing
        s_ThreadEvents[i] = new ManualResetEvent(false);
        ThreadPool.QueueUserWorkItem(TestDocumentationExampleScriptsThreaded, i);
    }
    // Check for errors and build the error output if required.
    bool testFailed = false;
    StringBuilder results = new StringBuilder();
    for (int i = 0; i < s_ThreadEvents.Length; ++i)
    {
        // Wait for the test to finish.
        s_ThreadEvents[i].WaitOne();

        if (s_DocInfo[i].status == TestStatus.Failed)
        {
            testFailed = true;
            GenerateFailureMessage(results, s_DocInfo[i]);
        }
    }

    // If a single item has failed then the test is considered a failure.
    Assert.IsFalse(testFailed, results.ToString());
}

public static void TestDocumentationExampleScriptsThreaded(object o)
{
    var infoIdx = (int)o;
    var info = s_DocInfo[infoIdx];
    try
    {
        TestScriptsCompile(info);
    }
    catch (Exception e)
    {
        info.status = TestStatus.Failed;
        info.testRunnerFailure = e.ToString();
    }
    finally
    {
        s_ThreadEvents[infoIdx].Set();
    }
}

This took the test time from 30 minutes to 2, which is fine for running as part of our build verification.

Since we couldn't test individual examples with NUnit any more, we added a button to the scripting doc editor so developers can test examples as they write them. Any script with an error is colored red when the test is run, and the error messages are displayed beneath it.
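
I can't share the internal doc editor itself, but conceptually the button is just an IMGUI control that reuses the same compile check on the member being edited, along these lines (a hypothetical sketch, not the actual editor code):

using UnityEditor;
using UnityEngine;

// Hypothetical sketch only: the real scripting doc editor is internal. The point is
// that the button simply reuses TestScriptsCompile on the member being edited.
public class DocExampleCheckerWindow : EditorWindow
{
    // Assumed to be filled in by the doc editor with the member currently being edited.
    ScriptVerification.ScriptingDocMember m_CurrentMember;

    void OnGUI()
    {
        if (GUILayout.Button("Verify Examples") && m_CurrentMember != null)
        {
            ScriptVerification.TestScriptsCompile(m_CurrentMember);
            bool passed = m_CurrentMember.status == ScriptVerification.TestStatus.Passed;
            EditorUtility.DisplayDialog("Script Verification",
                passed ? "All examples compiled." : "One or more examples failed to compile.",
                "OK");
        }
    }
}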

When the test was first run we had 326 failures, which I whitelisted so they could be fixed at a later date. We now have that down to 32, most of which are failures in the test runner itself, mainly due to it not having access to some specific assemblies. No new issues have been introduced since, and we can rest assured that when we deprecate part of the API the test will fail and we can then update the example to use the new API.

Overall I thought this was an interesting use of the Editor Test Runner. It does have some limitations: we only test the C# examples, as I have not managed to get JS compilation working, although that won't be an issue in the future.

Here is the full test.

using System;
using System.CodeDom.Compiler;
using UnityEngine;
using NUnit.Framework;
using System.Collections.Generic;
using System.IO;
using System.Linq;
using System.Reflection;
using System.Text;
using System.Threading;
using System.Xml;
using Microsoft.CSharp;
using UnderlyingModel;
using UnityEditor;

public class ScriptVerification
{
    const string k_PathToApiDocs = @"../../../../Documentation/ApiDocs/";
    const string k_FileExtension = ".mem.xml";
    const string k_WhiteList = "Assets/Editor/ScriptVerificationWhiteList.txt";

    public enum TestStatus
    {
        Unknown,     // Nothing has been done to this test yet.
        Ignored,     // Tests are ignored if they contain no example code
        Failed,      // The test failed to compile one or more of the examples.
        Passed,      // All examples were compiled successfully.
        Whitelisted, // Test was ignored as the member is in the white list file.
    }

    public class ExampleScript
    {
        public string code;
        public CompilerResults compileResults;
    }

    public class ScriptingDocMember
    {
        public TestStatus status = TestStatus.Unknown;

        // Information on the test and the xml file it can be found in.
        public string path;
        public string parent;
        public string name;
        public string nspace;
        public bool editor;

        public List<ExampleScript> csharpExamples = new List<ExampleScript>();

        // If we fail to compile multiple examples we also attempt to compile them as a single example.
        public CompilerResults combinedResults;

        // Error message if something caused the test runner to fail.
        public string testRunnerFailure;
    }

    static List<ScriptingDocMember> s_DocInfo;
    static ManualResetEvent[] s_ThreadEvents;

    [SetUp]
    public void SetupScriptVerification()
    {
        // Parse the scripting doc files and prepare the test data.
        // If the docs path is relative (no drive letter), resolve it relative to the project's Assets folder.
        string path = k_PathToApiDocs;
        if (!path.Contains(":"))
        {
            path = Application.dataPath + "/" + k_PathToApiDocs;
        }
        var files = Directory.GetFiles(path, "*" + k_FileExtension, SearchOption.AllDirectories);

        s_DocInfo = new List<ScriptingDocMember>();
        var whiteList = GetWhiteList();

        for (int i = 0; i < files.Length; ++i)
        {
            var xml = new XmlDocument();
            xml.Load(files[i]);
            XmlNode xmlheader = xml.FirstChild;
            XmlNode docsheader = xmlheader.NextSibling;
            XmlNode namespaceTag = docsheader.FirstChild;
            ParseMemberNode(namespaceTag, files[i], "", s_DocInfo, whiteList);
        }
    }

    [Test]
    public void ScriptVerificationCSharp()
    {
        // Setup. Start all tests running on multiple threads.
        // This gets the test time down from 30 minutes to around 2 minutes.
        s_ThreadEvents = new ManualResetEvent[s_DocInfo.Count];
        for (int i = 0; i < s_DocInfo.Count; ++i)
        {
            if (s_DocInfo[i].csharpExamples.Count == 0)
            {
                // Ignore items with no examples
                s_ThreadEvents[i] = new ManualResetEvent(true);
                s_DocInfo[i].status = TestStatus.Ignored;
            }
            else if (s_DocInfo[i].status == TestStatus.Whitelisted)
            {
                // Skip white listed items
                s_ThreadEvents[i] = new ManualResetEvent(true);
            }
            else
            {
                // Queue this example up for testing
                s_ThreadEvents[i] = new ManualResetEvent(false);
                ThreadPool.QueueUserWorkItem(TestDocumentationExampleScriptsThreaded, i);
            }
        }

        // Check for errors and build the error output if required.
        bool testFailed = false;
        StringBuilder results = new StringBuilder();
        for (int i = 0; i < s_ThreadEvents.Length; ++i)
        {
            s_ThreadEvents[i].WaitOne();

            if (s_DocInfo[i].status == TestStatus.Failed)
            {
                testFailed = true;
                GenerateFailureMessage(results, s_DocInfo[i]);
            }
        }

        // If a single item has failed then the test is considered a failure.
        Assert.IsFalse(testFailed, results.ToString());
    }

    static void GenerateFailureMessage(StringBuilder output, ScriptingDocMember info)
    {
        output.AppendLine(new string('-', 100));
        output.AppendLine("Name: " + info.name);
        output.AppendLine("Path: " + info.path + "\n");

        // Print out the example scripts along with their errors.
        for (int i = 0; i < info.csharpExamples.Count; ++i)
        {
            var example = info.csharpExamples[i];
            if (example.compileResults != null && example.compileResults.Errors.HasErrors)
            {
                output.AppendLine("Example Script " + i + ":\n");

                // Add line numbers
                var lines = example.code.SplitLines();
                int lineNumber = 0;
                int startLine = 0;

                // Find the first line of code so the line numbers align correctly.
                // The compiler will ignore any empty lines at the start.
                for (; startLine < lines.Length; ++startLine)
                {
                    if (!string.IsNullOrEmpty(lines[startLine]))
                        break;
                }

                for (; startLine < lines.Length; ++startLine)
                {
                    // Does this line contain an error?
                    string lineMarker = " ";
                    foreach (CompilerError compileResultsError in example.compileResults.Errors)
                    {
                        // Add a mark to indicate this line has a reported error.
                        if (compileResultsError.Line == lineNumber)
                        {
                            lineMarker = "-";
                            break;
                        }
                    }

                    output.AppendFormat("{0}{1:000} | {2}\n", lineMarker, lineNumber++, lines[startLine]);
                }

                output.Append("\n\n");
                output.AppendLine(ErrorMessagesToString(example.compileResults));
            }
        }

        if (info.combinedResults != null)
        {
            output.AppendLine("Combined Example Scripts:\n");
            output.AppendLine(ErrorMessagesToString(info.combinedResults));
        }

        if (!string.IsNullOrEmpty(info.testRunnerFailure))
        {
            output.AppendLine("Test Runner Failure: " + info.testRunnerFailure);
        }
    }

    // Concatenates all the errors into a formatted list.
    public static string ErrorMessagesToString(CompilerResults cr)
    {
        string errorMessages = "";
        foreach (CompilerError compilerError in cr.Errors)
        {
            errorMessages += string.Format("{0}({1},{2}): {3}\n", compilerError.ErrorNumber, compilerError.Line, compilerError.Column, compilerError.ErrorText);
        }
        return errorMessages;
    }

    public static void TestDocumentationExampleScriptsThreaded(object o)
    {
        var infoIdx = (int)o;
        var info = s_DocInfo[infoIdx];
        try
        {
            TestScriptsCompile(info);
        }
        catch (Exception e)
        {
            info.status = TestStatus.Failed;
            info.testRunnerFailure = e.ToString();
        }
        finally
        {
            s_ThreadEvents[infoIdx].Set();
        }
    }

    // Tests all scripts compile for the selected scripting member.
    // First attempts to compile all scripts separately; if this fails, it then compiles them combined as a single example.
    public static void TestScriptsCompile(ScriptingDocMember info)
    {
        var scripts = info.csharpExamples;
        if (scripts.Count == 0)
        {
            info.status = TestStatus.Ignored;
            return;
        }

        // Setup compiler
        var providerOptions = new Dictionary<string, string>();
        providerOptions.Add("CompilerVersion", "v3.5");
        var domProvider = new CSharpCodeProvider(providerOptions);
        var compilerParams = new CompilerParameters
        {
            GenerateExecutable = false,
            GenerateInMemory = false,
            TreatWarningsAsErrors = false,
        };
        Assembly[] assemblies = AppDomain.CurrentDomain.GetAssemblies();
        foreach (var assembly in assemblies)
        {
            compilerParams.ReferencedAssemblies.Add(assembly.Location);
        }

        // Attempt to compile the scripts separately.
        bool error = false;
        for (int i = 0; i < scripts.Count; i++)
        {
            scripts[i].compileResults = domProvider.CompileAssemblyFromSource(compilerParams, scripts[i].code);
            if (scripts[i].compileResults.Errors.HasErrors)
                error = true;
        }

        if (error)
        {
            // It's possible that the scripts are all parts of one larger example, so compile them together and see if that works instead.
            info.combinedResults = domProvider.CompileAssemblyFromSource(compilerParams, scripts.Select(s => s.code).ToArray());
            if (!info.combinedResults.Errors.HasErrors)
                error = false;
        }

        info.status = error ? TestStatus.Failed : TestStatus.Passed;
    }

    static HashSet<string> GetWhiteList()
    {
        var textAsset = AssetDatabase.LoadAssetAtPath(k_WhiteList, typeof(TextAsset)) as TextAsset;
        var whiteList = new HashSet<string>();
        if (textAsset)
        {
            foreach (var line in textAsset.text.Split('\n'))
            {
                whiteList.Add(line.Replace("\r", "").TrimEnd(' '));
            }
        }
        return whiteList;
    }

    // Parses the scripting docs and generates our test data.
    static void ParseMemberNode(XmlNode node, string file, string parent, List<ScriptingDocMember> infoList, HashSet<string> whiteList)
    {
        ScriptingDocMember info = new ScriptingDocMember();
        info.path = file;
        infoList.Add(info);
        info.parent = parent;
        foreach (XmlAttribute attr in node.Attributes)
        {
            // potential tag attributes: name, namespace, type
            var attrLowercase = attr.Name.ToLower();
            if (attrLowercase == "name") info.name = attr.Value;
            else if (attrLowercase == "namespace") info.nspace = attr.Value;
        }

        if (whiteList.Contains(info.name))
            info.status = TestStatus.Whitelisted;

        if (!string.IsNullOrEmpty(info.nspace))
        {
            // trim down the namespace to remove UnityEngine and UnityEditor
            if (info.nspace.StartsWith("UnityEngine"))
            {
                info.editor = false;
                info.nspace = info.nspace.Remove(0, "UnityEngine".Length);
            }
            if (info.nspace.StartsWith("UnityEditor"))
            {
                info.editor = true;
                info.nspace = info.nspace.Remove(0, "UnityEditor".Length);
            }
            if (info.nspace.StartsWith("."))
                info.nspace = info.nspace.Remove(0, 1);
        }

        foreach (XmlNode child in node.ChildNodes)
        {
            var childNameLowercase = child.Name.ToLower();
            if (childNameLowercase == "section")
            {
                // see if this section is undoc
                for (int i = 0; i < child.Attributes.Count; i++)
                {
                    if (child.Attributes[i].Name == "undoc" && child.Attributes[i].Value == "true")
                    {
                        infoList.Remove(info);
                        break;
                    }
                }

                foreach (XmlNode grandChild in child.ChildNodes)
                {
                    var codeLangNode = GetExample(grandChild);
                    if (codeLangNode != null)
                    {
                        var scriptInfo = new ExampleScript();
                        scriptInfo.code = codeLangNode.InnerXml.Replace("<![CDATA[", "").Replace("]]>", "");
                        info.csharpExamples.Add(scriptInfo);
                    }
                }
            }
            else if (childNameLowercase == "member")
            {
                ParseMemberNode(child, file, info.name, infoList, whiteList);
            }
        }
    }

    // Extract the cs example code.
    static XmlNode GetExample(XmlNode node)
    {
        if (node.Name.ToLower() == "example")
        {
            for (int i = 0; i < node.Attributes.Count; ++i)
            {
                if (node.Attributes[i].Name == "nocheck" && node.Attributes[i].Value == "true")
                    return null;
            }
            return node.SelectSingleNode("code[@lang='cs']");
        }
        return null;
    }
}