
Getting started with performance benchmarking in Unity



As a Unity developer, you want your users to love playing your games, enjoying a smooth experience on all the platforms they can play on. What if I told you we made it easier to create performance benchmarks? If you want to learn how to develop Unity games or tools with an eye to performance, read on!

In this post, I explain how to use a few Unity tools that offer an easy way to start collecting performance metrics and creating benchmarks with them: the Unity Test Runner that ships with the Unity Editor, the Unity Performance Testing Extension, and the Unity Performance Benchmark Reporter.

Why benchmark performance in Unity?

As a Unity developer, you may find yourself in the following situation: your project was running quickly and smoothly not so long ago, but then one or more changes landed, and now scenes are noticeably slow, frames are dropping, and other performance problems have crept in. Tracking down which changes caused the performance regression can be difficult.

If you're a Unity partner, you want to understand how performance changes with new SDKs, drivers, platforms, packages, or other artifacts. Or perhaps you'd like to collect performance metrics for your products across different versions of Unity, but it isn't clear how to do so, or how to compare the results afterwards.

These are just a couple of examples where establishing performance benchmarks can really save the day. Now let me show you how to start collecting performance metrics, create benchmarks with them, and track changes in those metrics.

Download the sample project

For this discussion, we will review the test code in the UnityPerformanceBenchmark sample performance test project.

Download the latest version of XRAutomatedTests from GitHub. You will find the UnityPerformanceBenchmark project in the PerformanceTests subdirectory.

Writing performance tests in Unity Test Runner

The UnityPerformanceBenchmark project contains a variety of sample scenes and Unity Performance Tests written with the Unity Performance Testing Extension.

The first thing we'll do is look at how to write performance tests using the Unity Test Runner with the Unity Performance Testing Extension. Here's some basic information on both of these tools before we proceed.

Unity Test Runner

We use the Unity Test Runner to execute our performance tests. The Unity Test Runner is a test execution framework built into the Unity Editor that lets you test code in Edit and Play mode, both in the Editor and on target platform players such as Standalone, Android, or iOS. If you're not familiar with the Unity Test Runner, consult the Unity Test Runner documentation.

Unity Performance Testing Extension

The Unity Performance Testing Extension is a Unity Editor package that provides an API and test case attributes that let you sample and aggregate both Unity Profiler markers and custom, non-profiler metrics, in both the Unity Editor and players. You can learn more by reading the Unity Performance Testing Extension documentation, but here we'll walk through some examples.

The Unity Performance Testing Extension requires Unity 2018.1 or later. Be sure to use Unity 2018.1 or later if you want to run the sample performance tests in the UnityPerformanceBenchmark project, or whenever you use the Unity Performance Testing Extension.

Open the sample project using the command line

The UnityPerformanceBenchmark project implements the IPrebuildSetup interface, a Unity Test Runner feature that lets you implement a setup method that is called automatically before the Unity Test Runner executes the tests.

The first thing the UnityPerformanceBenchmark project's IPrebuildSetup.Setup method does is parse command-line arguments for player build settings. This lets us flexibly build the player for our performance tests from the same Unity project across different platforms, render threading modes, player graphics APIs, scripting backends, and XR-enabled settings such as the stereo rendering path and VR SDK.
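To make the mechanics concrete, here's a minimal sketch of an IPrebuildSetup implementation that parses one such argument and applies it to the player build settings. The -scriptingbackend flag and the class name mirror the project's conventions, but this is illustrative, not the project's exact code:

    using UnityEngine.TestTools;
    #if UNITY_EDITOR
    using UnityEditor;
    #endif

    public class RenderPerformanceTestSetup : IPrebuildSetup
    {
        // Called automatically in the Editor before Unity Test Runner builds and runs the tests
        public void Setup()
        {
    #if UNITY_EDITOR
            foreach (var arg in System.Environment.GetCommandLineArgs())
            {
                // e.g. -scriptingbackend=mono or -scriptingbackend=il2cpp
                if (arg.StartsWith("-scriptingbackend="))
                {
                    var backend = arg.EndsWith("il2cpp")
                        ? ScriptingImplementation.IL2CPP
                        : ScriptingImplementation.Mono2x;
                    PlayerSettings.SetScriptingBackend(
                        EditorUserBuildSettings.selectedBuildTargetGroup, backend);
                }
            }
    #endif
        }
    }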

Therefore, you'll need to open the UnityPerformanceBenchmark project with Unity from the command line, passing the player build options we want to use when we run the tests in the Unity Test Runner.

Example: launching the UnityPerformanceBenchmark project from Windows to build an Android player

Here we launch Unity on Windows to build for Android with the OpenGLES3 graphics API, multithreaded rendering, and the Mono scripting backend.
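A representative invocation; -projectPath and -buildTarget are standard Unity command-line flags, while the lower-case options are the test project's own arguments (verify their exact names on the project wiki), and the paths are illustrative:

    Unity.exe -projectPath C:\XRAutomatedTests\PerformanceTests\UnityPerformanceBenchmark ^
        -buildTarget Android ^
        -playergraphicsapi=OpenGLES3 -mtRendering -scriptingbackend=mono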

Example: launching the UnityPerformanceBenchmark project from OSX to build an iOS player

Here we launch Unity on OSX to build for iOS with the OpenGLES3 graphics API, multithreaded rendering, and the Mono scripting backend. We also provide the Apple developer team and provisioning profile information needed to deploy to an iOS device.
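A representative invocation; the project-specific flags, including the team and provisioning profile flag names, are illustrative, so check the project wiki for the exact spelling:

    /Applications/Unity/Unity.app/Contents/MacOS/Unity \
        -projectPath /XRAutomatedTests/PerformanceTests/UnityPerformanceBenchmark \
        -buildTarget iOS \
        -playergraphicsapi=OpenGLES3 -mtRendering -scriptingbackend=mono \
        -appleDeveloperTeamID=<yourTeamID> -iOSProvisioningProfileID=<yourProfileID>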

When we open the UnityPerformanceBenchmark project with Unity from the command line as in the previous examples, the command-line arguments remain in memory for the IPrebuildSetup.Setup method to parse and use when building the player.

Although this command-line launch approach isn't required to run tests in the Unity Test Runner, it's a good pattern to adopt to avoid maintaining a separate test project for each player configuration.

I've detailed the command-line options for opening the project, or simply running the tests, from the command line on the test project wiki: How to Run the Unity Performance Benchmark Tests. For more detail on how the test project parses the player build settings, take a look at the RenderPerformancePrebuildStep.cs file in the Scripts directory of the UnityPerformanceBenchmark test project.

Open the Test Runner window

After opening UnityPerformanceBenchmark, we need to open the Unity Test Runner window in the Unity Editor:

  • In Unity 2018.1, go to Window > Test Runner.
  • In Unity 2018.2, go to Window > General > Test Runner.

The Unity Test Runner window will open, as shown below.

Unity Test Runner with tests loaded.

These are our Unity Performance Tests. We can run them in the Editor using the Run All button at the top left of the window, or on the current device or platform using the "Run all in player" button at the top right of the window.

Debugging tip

If you want to debug the code in your IPrebuildSetup.Setup method:

  1. Set breakpoints in your IPrebuildSetup.Setup code in Visual Studio.
  2. Attach to the Unity Editor with the Visual Studio Tools for Unity extension.
  3. Run the tests in the Editor using the Run All or Run Selected button in the Unity Test Runner window.

At this point, the Visual Studio debugger will break into your code, where you can debug as needed.

A Unity Performance Test example

Let's take a look at an example of performance testing to get a better understanding of how it works.

Example: sampling profiler markers in a Unity Performance Test

In this example, our test method is called SpiralFlame_RenderPerformance. We know from the method's [PerformanceUnityTest] attribute that this is a Unity Performance Test.

All tests in the UnityPerformanceBenchmark test project follow the same pattern we see in this test method (sketched in code after the list):

  1. Load the scene for the test
  2. Set the scene as active so we can interact with it in the test method
  3. Create a test object of type DynamicRenderPerformanceMonoBehaviourTest and add it to the test scene (this happens in the SetupPerfTest method)
  4. Wait a fixed amount of time for the scene to "settle" after loading and adding the test object, before we start sampling metrics
  5. Set up the profiler markers we want to capture via the Performance Testing Extension API
  6. Let the test object know we're ready to start capturing metrics
  7. Yield the test object (an IMonoBehaviourTest) so metrics are captured during the render loop
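In condensed form, the pattern looks like this. It's a sketch rather than the project's exact code; the scene name, settle time, markers, and the SetupPerfTest helper are illustrative of the sample project's conventions, and it assumes using System.Collections, Unity.PerformanceTesting, UnityEngine, UnityEngine.SceneManagement, and UnityEngine.TestTools:

    private const string testSceneName = "SpiralFlame"; // illustrative scene name

    [PerformanceUnityTest]
    public IEnumerator SpiralFlame_RenderPerformance()
    {
        // 1-2. Load the test scene additively and make it active
        yield return SceneManager.LoadSceneAsync(testSceneName, LoadSceneMode.Additive);
        SceneManager.SetActiveScene(SceneManager.GetSceneByName(testSceneName));

        // 3. Create the test object and add it to the test scene
        var renderPerformanceTest = SetupPerfTest<DynamicRenderPerformanceMonoBehaviourTest>();

        // 4. Wait a fixed amount of time for the scene to settle before sampling
        yield return new WaitForSecondsRealtime(2f);

        // 5. Register the profiler markers we want to sample
        Measure.ProfilerMarkers(new[]
        {
            new SampleGroupDefinition("Camera.Render"),
            new SampleGroupDefinition("Render.Mesh")
        });

        // 6. Tell the test object we're ready to start capturing metrics
        renderPerformanceTest.component.CaptureMetrics = true;

        // 7. Yield the IMonoBehaviourTest while metrics are captured in the render loop
        yield return renderPerformanceTest;
    }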

We also sample custom metrics (metrics that don't fit one of the Unity Profiler markers, frame count, or execution time) in the RenderPerformanceMonoBehaviourTestBase base class (this class inherits from MonoBehaviour).

Example: sampling custom metrics in a MonoBehaviour script

In this example, we acquire FPS, GpuTimeLastFrame (if XR is enabled), and application startup time (if Unity Analytics is enabled and we're running on Unity 2018.2 or later, where the API we need is available).
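Here's a minimal sketch of the idea; field names like FpsDef, GpuTimeLastFrameDef, and CaptureMetrics are illustrative of the sample project's conventions, not its exact code:

    // Inside RenderPerformanceMonoBehaviourTestBase : MonoBehaviour
    private void Update()
    {
        if (!CaptureMetrics) return;
        FrameCount++;

        // Per-frame FPS sample; the extension aggregates these samples for us
        Measure.Custom(FpsDef, 1f / Time.unscaledDeltaTime);

        if (UnityEngine.XR.XRSettings.enabled)
        {
            // GPU time spent on the last frame, from the XR stats API
            float gpuTimeLastFrame;
            if (UnityEngine.XR.XRStats.TryGetGPUTimeLastFrame(out gpuTimeLastFrame))
            {
                Measure.Custom(GpuTimeLastFrameDef, gpuTimeLastFrame);
            }
        }
    }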

IsTestFinished property

Finally, note that in the same RenderPerformanceMonoBehaviourTestBase base class we implement a public bool IsTestFinished property. We need to implement this property because RenderPerformanceMonoBehaviourTestBase implements the IMonoBehaviourTest interface.

This property is important because the Unity Test Runner uses it to know when to stop the test: when the value is true, the test ends. It's up to you to implement whatever logic should determine when the Unity Test Runner stops the test.

Example: sampling custom metrics in the IsTestFinished property

In this last example, we capture the number of GameObjects, triangles, and vertices rendered in the scene at the end of the test.
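A sketch of what that can look like; targetFrameCount and the SampleGroupDefinition fields are illustrative, and the counting approach here is a simple stand-in for the project's own:

    public bool IsTestFinished
    {
        get
        {
            // Stop once we've sampled the desired number of frames
            var isTestFinished = FrameCount >= targetFrameCount;
            if (isTestFinished)
            {
                // Capture scene-complexity metrics once, at the end of the test
                var verts = 0;
                var tris = 0;
                foreach (var meshFilter in FindObjectsOfType<MeshFilter>())
                {
                    if (meshFilter.sharedMesh == null) continue;
                    verts += meshFilter.sharedMesh.vertexCount;
                    tris += meshFilter.sharedMesh.triangles.Length / 3;
                }
                Measure.Custom(ObjCountDef, FindObjectsOfType<GameObject>().Length);
                Measure.Custom(TrisDef, tris);
                Measure.Custom(VertsDef, verts);
            }
            return isTestFinished;
        }
    }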

SampleGroupDefinition

Now that we've seen some examples of how we call into the Performance Testing Extension to sample metrics, let's talk about how we configure those samples in the first place.

The Measure.* methods generally take a struct parameter called SampleGroupDefinition. When we create a new SampleGroupDefinition, we define a few properties for the samples we're interested in.

Example: defining a new SampleGroupDefinition for GpuTimeLastFrame, using milliseconds as the sample unit and aggregating samples with the minimum value

Below is the SampleGroupDefinition for GpuTimeLastFrame. This is how we let the Performance Testing Extension know how to collect and aggregate samples for GpuTimeLastFrame.
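A sketch of such a definition; the field name is illustrative, and you should check the constructor signature against your installed version of the extension:

    // Aggregate GpuTimeLastFrame samples in milliseconds, using the minimum value
    private readonly SampleGroupDefinition gpuTimeLastFrameDefinition =
        new SampleGroupDefinition("GpuTimeLastFrame", SampleUnit.Millisecond, AggregationType.Min);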

This SampleGroupDefinition comes from a dynamic scene's render performance test, so here we chose to aggregate our samples using the minimum collected value. But why do that rather than use a more common aggregation, such as the median or the mean?

The answer is that the scene is dynamic. In a dynamic scene, a mean or median aggregation would be unreliable or inconsistent across runs of the same scene on the same code, given the changing nature of the rendering. Taking the minimum is probably the best we can do if we want to track a single aggregate for a rendering metric in a dynamic scene. When we define a similar SampleGroupDefinition for our static scenes, however, we use a median aggregation.

Example: defining a new SampleGroupDefinition for FPS, with no sample unit, aggregating samples with the median value, where an increase in value is better

Below is the SampleGroupDefinition for FPS (frames per second). FPS doesn't have a separate unit of measurement; it's just FPS, so we specify SampleUnit.None here. We use a median aggregation this time; this is a static scene, so we don't have to worry about unpredictable rendering behavior. We also explicitly set a 15% threshold for the sample group and pass true for the increaseIsBetter argument because, if FPS increases, it's a good thing!
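A sketch of this definition; again, the field name is illustrative, and the argument names follow the preview extension API, so verify them against your installed version:

    // Median-aggregated FPS with no unit, a 15% threshold, and higher-is-better semantics
    private readonly SampleGroupDefinition fpsDefinition = new SampleGroupDefinition(
        "FPS", SampleUnit.None, AggregationType.Median, threshold: 0.15, increaseIsBetter: true);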

These last two arguments are collected and saved in the performance test result .xml file when the tests are run from the command line, and they can then be used by the Unity Performance Benchmark Reporter when establishing benchmarks.

At the end of the test, all of the metric samples registered earlier are aggregated by the Performance Testing Extension.

Measurement types

I want to point out that our code examples use a couple of different Unity Performance Testing Extension APIs, namely

  • Measure.ProfilerMarkers, and
  • Measure.Custom

The Unity Performance Testing Extension also provides other Measure methods to suit your specific needs, depending on what and how you want to measure Unity performance. These additional methods include:

  • Measure.Method
  • Measure.Frames
  • Measure.Scope
  • Measure.FrameTimes

You can find more information on the different Measure methods in the Unity Performance Testing Extension documentation, particularly in the "Taking measurements" section.

Running performance tests with the Unity Test Runner

Now that we've reviewed some examples of how we write performance tests with the Unity Test Runner and the Unity Performance Testing Extension, let's look at how we run them.

There are two main ways to run our performance tests:

  1. From the command line, launching Unity with the -runTests option. This is the preferred method for performance testing, because the Unity Performance Testing Extension will generate a result .xml file that we can use with the Unity Performance Benchmark Reporter to view and compare our results.
  2. Directly from the Editor. This is a useful approach if you
    a. just want to run the tests and view the results in the Unity Test Runner window, without needing to capture the results for later use, or
    b. want to verify that the tests run, or need to debug the test code.

Running performance tests with the -runTests command-line option

Here are two examples of how to run performance tests with the Unity Test Runner from the command line. These should look very familiar: they build on the same examples we saw earlier in our discussion of opening the UnityPerformanceBenchmark project from the command line.

Example: running the UnityPerformanceBenchmark performance tests from Windows on an Android player

Here we launch Unity on Windows to build for Android with the OpenGLES3 graphics API, multithreaded rendering, and the Mono scripting backend.
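A representative invocation; -runTests, -testResults, -logFile, and -batchmode are standard Unity flags, while the lower-case options and all paths are the test project's conventions and illustrative values:

    Unity.exe -projectPath C:\XRAutomatedTests\PerformanceTests\UnityPerformanceBenchmark ^
        -buildTarget Android ^
        -playergraphicsapi=OpenGLES3 -mtRendering -scriptingbackend=mono ^
        -runTests -batchmode ^
        -testResults C:\PerfTests\results\AndroidPerfResults.xml ^
        -logFile C:\PerfTests\logs\AndroidPerfLog.txt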

Example: running the UnityPerformanceBenchmark performance tests from OSX on an iOS player

Here we launch Unity on OSX to build for iOS with the OpenGLES3 graphics API, multithreaded rendering, and the Mono scripting backend. We also provide the Apple developer team and provisioning profile information needed to deploy to an iOS device.
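A representative invocation; the team and provisioning profile flag names follow the test project's conventions, so treat their exact spelling, and all paths, as illustrative:

    /Applications/Unity/Unity.app/Contents/MacOS/Unity \
        -projectPath /XRAutomatedTests/PerformanceTests/UnityPerformanceBenchmark \
        -buildTarget iOS \
        -playergraphicsapi=OpenGLES3 -mtRendering -scriptingbackend=mono \
        -appleDeveloperTeamID=<yourTeamID> -iOSProvisioningProfileID=<yourProfileID> \
        -runTests -testResults /PerfTests/results/iOSPerfResults.xml \
        -logFile /PerfTests/logs/iOSPerfLog.txt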

Compared with simply opening the Unity Editor with arguments for the IPrebuildSetup.Setup method, both of these examples introduce three to four new command-line options that help us run our tests:

-runTests
This option tells the Unity Test Runner that you want to run your tests.

-testResults
This option specifies the file name and path of the .xml file where the Unity Test Runner should save the performance test results.

-logFile
This option specifies the file name and path of the file where the Unity Editor should write its log. This option is optional, but it can be very useful when investigating errors and failures to have quick access to the Unity Editor log file.

-batchmode
This option runs the Unity Editor in headless mode. We use it when we're running player performance tests only and don't need the Unity Editor window to actually open. This can save time during automated test runs. When this option is omitted, the Unity Editor opens on screen before the tests run.

At Unity we perform our command line performance tests, often in batch mode, in our continuous integration system.

Example: Running the UnityPerformanceBenchmark tests from the command line

Running performance tests in the Unity Editor

With the Unity Test Runner window open and the PlayMode tab selected at the top (PlayMode tests run both in built players and in the Editor's play mode), we have:

  1. Run All: click this button to run all the tests in the PlayMode tab.
  2. Run Selected: click this button to run the selected test, or the selected node and all tests beneath it.
  3. Run all in player: click this button to have the Unity Editor build the player type configured in the Build Settings and run the tests there.
Important requirement
Running performance tests in the Unity Editor from the Test Runner window will not produce the .xml result file that the Unity Performance Benchmark Reporter requires.

If you want a result .xml file after the performance tests run, you need to run the tests by launching Unity from the command line with the -runTests option. Keep in mind, though, that when you run Unity with the -runTests command-line option, the Editor will open and start running the tests.

The resulting .xml files contain the results and metadata of the test runs. We'll use them with the Unity Performance Benchmark Reporter to create benchmark results and compare subsequent test runs against them.

Example: running performance tests in the Unity Editor

View performance test results

If we run the tests from the Editor, we can view the aggregated sample values at the bottom of the Unity Test Runner window by selecting each test.

Example: viewing sample aggregates of a performance test in the Unity Test Runner

If you run the Unity Performance Tests from the command line and want to view the results, you'll need to use the Unity Performance Benchmark Reporter (or open the result .xml file directly, but it isn't an easy read).

That said, let's talk about how we can use Unity Performance Benchmark Reporter to visualize and compare results.

Using the Unity Performance Benchmark Reporter

The Unity Performance Benchmark Reporter enables comparison of performance metric baselines and subsequent performance metrics (as generated by the Unity Test Runner with the Unity Performance Testing Extension) in an HTML report with graphical visualizations.

The reporter is built as a .NET Core 2.x assembly so it can run on any supported .NET platform (Windows, OSX, etc.). To run it, make sure you've installed the .NET Core 2.x SDK.

Running the Unity Performance Benchmark Reporter involves invoking the assembly with the dotnet command.
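A representative invocation, with illustrative paths:

    dotnet UnityPerformanceBenchmarkReporter.dll \
        --baseline=/PerfTests/results/BaselineResults.xml \
        --results=/PerfTests/results/latest \
        --reportdirpath=/PerfTests/report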

After the reporter runs, a directory named UnityPerformanceBenchmark is created containing an HTML report along with supporting .css, .js, and image files. Open the HTML report to view graphs of the performance metrics captured in the result .xml files.

Command line options

--results
The path to a directory containing one or more non-baseline result .xml files to include in the HTML report.

At least one --results value must be passed to the UnityPerformanceBenchmarkReporter.dll assembly. This is the only required option.

This option can also specify the path to a single non-baseline result .xml file. In addition, you can specify multiple directories or files by repeating the option.
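For example, with illustrative paths:

    dotnet UnityPerformanceBenchmarkReporter.dll \
        --results=/PerfTests/results/run1 \
        --results=/PerfTests/results/run2/MoreResults.xml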

--baseline
The path to a result .xml file to use as the baseline when comparing other results against it.

--reportdirpath
The path to a directory where the reporter will create the performance benchmark report. The report is created in a UnityPerformanceBenchmark subdirectory at this location.

If no report directory path is specified, the UnityPerformanceBenchmark subdirectory is created in the working directory where UnityPerformanceBenchmarkReporter.dll was invoked.

Comparing performance test results

Let's compare some performance test results with the Performance Benchmark Reporter.

Example: experimenting with configuration changes in a VR-enabled Gear VR scene to improve frame rate

I have a Unity scene with the following complexity characteristics:

  • 732 objects
  • 95,898 triangles
  • 69,740 vertices

Our Gear VR scene

I ran a Unity Performance Test against this scene, sampling metrics that would help me understand whether I could sustain roughly 60 FPS using Multi Pass stereo rendering. I then ran the Performance Benchmark Reporter with my test results.

What I found is that my FPS was closer to 30 FPS, half of what I'd like it to be.

Next, I'll try using Single Pass multiview stereo rendering to see how much closer I can get to 60 FPS. I'll rerun my performance test with the configuration change, then create another Unity Performance Benchmark report comparing my first results against the new ones.

Results of switching from Multi Pass to Single Pass multiview stereo rendering.

It looks like the configuration switch to Single Pass multiview rendering improved the FPS value to 37. We still need to get much closer to 60 FPS if we want this scene to run without significant frame drops on Gear VR.

The last thing I'm going to experiment with is reducing the number of rotating cubes in my scene to see if we can recover more FPS.

After a couple of attempts I was able to improve performance to ~55 FPS, but I had to reduce the number of objects in the scene from 732 to 31. That's quite a reduction.

I'll come back to other optimizations I can make later, but for now I'll use this as my FPS baseline. I'll treat it as my benchmark going forward, hoping to improve on it if I can.

Achieving a more acceptable FPS for the VR scene.

Establishing benchmarks and tracking performance changes

Establishing benchmarks can mean many things depending on the project. In this context, running performance tests in Unity, it means establishing a set of baseline results, a last known good set of performance metrics that we can compare subsequent results against as we make changes. These become our benchmark.

In the previous section I arrived at a configuration that used Single Pass multiview stereo rendering for Gear VR and a reduced scene object count, which resulted in an "acceptable" FPS. At that point, I decided to use those test results as my benchmark. Let's look at an example of how we can use this benchmark as we make further changes to the player configuration.

Example: using a performance benchmark to detect performance regressions from configuration changes

I'd like to enable antialiasing in my scene to smooth out its appearance. Unity's default quality settings for Android disable antialiasing, but I'd like to see whether we can enable it and still maintain an acceptable FPS for our Gear VR scene.

First, I set the antialiasing value in my IPrebuildSetup.Setup method to 4.
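In sketch form, assuming the value is applied through UnityEngine.QualitySettings:

    // In IPrebuildSetup.Setup: request 4x MSAA in the player's quality settings
    QualitySettings.antiAliasing = 4;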

Then I rerun the performance test on my Gear VR-enabled Android phone, and use the Unity Performance Benchmark Reporter to compare this new run against the results of my newly established benchmark.

Detecting the regression in FPS after reconfiguring to use antialiasing at 4.

But look: after reconfiguring my Unity player to use antialiasing at level 4, my FPS dropped to 32 FPS, which is about where I originally started when I created this scene with 732 objects.

I’d like to experiment with a few lower antialiasing values to see if I can recover an acceptable FPS for the scene before I bail on this idea. So, I try with antialiasing set to 2, and then finally 1. The results are in the image below.

Experimenting with decreasing antialiasing values to recover acceptable FPS for the scene.

In this reconfiguration scenario, using the performance benchmark I established earlier, I was able to experiment with changes in my Unity player settings and then verify the performance impacts before committing to them.

Even though antialiasing set to 1 keeps me within my default 15% variance threshold for FPS, FPS is now at 49, a bit too far from the 60 FPS I'd like my VR-enabled scene to hit. I don't think I'll commit to these changes today.

Conclusion

Unity is putting a lot of focus on great performance by default. But the Unity Engine is only part of what ultimately results in users loving to play your games, enjoying a smooth, high-performance experience across all the platforms they may play on. SDKs, drivers, and Unity packages, for example, that work well without introducing performance regressions are critical to a great overall performance experience for everyone.

I've introduced you to a few Unity tools that make it easier to start collecting performance metrics and creating benchmarks with them: the Unity Test Runner, the Unity Performance Testing Extension, and the Unity Performance Benchmark Reporter. I encourage you to experiment with what they can do for you and your performance-focused efforts.

We looked at:

  • How we can use the Unity Test Runner to write performance tests to sample profiler and other metrics,
  • Some different ways we can execute performance tests using the Unity Test Runner, and
  • How to use the Unity Performance Benchmark Reporter to analyze and compare performance metrics, run over run, as you begin to up your performance testing game.

Establishing baselines for these metrics, and using them to create a benchmark for your scenes, game, SDK, driver, package, or other Unity integrations can be an effective way to start creating visibility into impacts your changes have. Good luck!

Many thanks and credit go to my Unity colleagues for their help contributing, brainstorming, experimenting, developing, and iterating on this work with me.

  • Qi Jiang
  • Sakari Pitkänen
  • Gintautas Skersys
  • Benjamin Smith

