UNITE 2015 Takeaways with Dr. Fredrick and Taylor Yust

Hailey Ray

November 18, 2015

In September, the Tesseract lead developers attended UNITE 2015, the Unity 3D developer conference, held in Boston from the 21st to the 24th. As always, there were some really amazing presentations and a helpful glimpse into the future of Unity for the upcoming year or two. There were some great sessions, and the bowling-and-pool-hall party venue was a big success; at the end of this blog series we'll also present our own wish list for the content and format we'd like to see at future UNITE conferences. Since none of us could attend every session, and even among the five of us we did not hit them all, we're each sharing our individual reactions to what we saw, as well as a wrap-up of what we took away as a group.

Dr. David Fredrick, Studio Director

As the director of a studio housed in a university, with a teaching/research mission, one key area of interest for me is the transition to WebGL. This interest was addressed at the Road Map session, and then again in more detail at a 60-minute session on Tuesday. While it was not the explicit focus of their presentation, John Radoff and Jason Booth of Disruptor Beam also provided extremely valuable insight into development strategies for WebGL. Synthesizing these, some high-level takeaways follow.

Mozilla and Unity are coordinating efforts to minimize "dead space" in the handoff from the Unity web plugin to WebGL. While the Unity team could not provide a firm estimate for when the "beta" status will be lifted from WebGL publishing, significant progress does seem to be underway, especially in minimizing the code footprint of builds and providing more efficient texture compression.

However, the need for a very compact initial package will remain, and in this regard Jason Booth's outline of successful approaches to Asset Bundles was incredibly helpful. While most developers have avoided Asset Bundles, Booth pointed out that they can work well, and for a game with very rich visual content like Disruptor Beam's Star Trek, they are essential. This bears repeating: if you are working on a project for delivery through WebGL and its graphics load is significant (which describes most heritage visualization and game products), you will need to use Asset Bundles. Unlike DB's Star Trek project, your primary platform may not be tablets and phones, but even for desktop delivery through the web, easing initial start times and managing memory through Asset Bundles will likely be essential. You will need to organize your project accordingly (e.g. one project hierarchy to handle the game, and a second hierarchy to handle the Asset Bundles). Booth's presentation is a great place to get an overview of this and to shape your thinking about how to plan projects for WebGL and Asset Bundles; a short sketch of the runtime loading pattern appears at the end of this section.

A second key area of interest for Tesseract and heritage visualization/gaming is getting all the performance we can out of the physically based rendering (PBR) pipeline in Unity 5. For this, the session on the dystopian future world game p.a.m.e.l.a. by NVYVE's Adam Simonar was really helpful. p.a.m.e.l.a. is done entirely with real-time lighting, and for this Simonar suggested checking only your large, architectural assets as static and handling all of your environmental clutter and furniture through light probes. He also suggested getting the most out of emissive surfaces and setting the real-time GI resolution at 1-2 pixels per meter, and noted that deferred reflection probes in 5.2 will help make these less expensive.
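To make that static/probe split concrete, here is a minimal editor-script sketch, assuming Unity 5-era APIs (GameObjectUtility.SetStaticEditorFlags, Renderer.useLightProbes). The menu path and the bounds-based size heuristic are our own inventions for illustration; a real project would tag its architecture explicitly rather than guess from bounding volumes.

```csharp
using UnityEditor;
using UnityEngine;

// Editor-only utility (place in an Editor folder): flag large architecture as
// lightmap-static for GI, and leave small clutter dynamic so it is lit by
// light probes instead, along the lines Simonar described.
public static class StaticFlagUtility
{
    [MenuItem("Tools/Apply Static Lighting Flags")]
    static void ApplyFlags()
    {
        foreach (MeshRenderer renderer in Object.FindObjectsOfType<MeshRenderer>())
        {
            GameObject go = renderer.gameObject;

            // Crude placeholder heuristic: treat anything with a large
            // bounding volume as architecture.
            bool isArchitecture = renderer.bounds.size.magnitude > 10f;

            if (isArchitecture)
            {
                // Architecture contributes to baked/realtime GI.
                GameObjectUtility.SetStaticEditorFlags(go, StaticEditorFlags.LightmapStatic);
            }
            else
            {
                // Clutter and furniture stay dynamic and sample light probes.
                GameObjectUtility.SetStaticEditorFlags(go, 0);
                renderer.useLightProbes = true;
            }
        }
    }
}
```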
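And returning to Asset Bundles, here is the promised sketch of the runtime loading side, assuming Unity 5-era APIs (WWW.LoadFromCacheOrDownload). The URL, version number, and asset name are placeholders for illustration, not details from Booth's talk.

```csharp
using System.Collections;
using UnityEngine;

// A minimal sketch of the streaming pattern: keep the initial WebGL download
// small and pull heavy art in as Asset Bundles at runtime.
public class BundleLoader : MonoBehaviour
{
    // Hypothetical hosting path; point this at wherever your bundles live.
    public string bundleUrl = "https://example.com/bundles/temple_interior";
    public int version = 1; // bump to invalidate the cached copy

    IEnumerator Start()
    {
        // LoadFromCacheOrDownload caches the bundle locally, so repeat
        // visits can skip the download entirely.
        using (WWW www = WWW.LoadFromCacheOrDownload(bundleUrl, version))
        {
            yield return www;
            if (!string.IsNullOrEmpty(www.error))
            {
                Debug.LogError("Bundle download failed: " + www.error);
                yield break;
            }

            AssetBundle bundle = www.assetBundle;

            // Instantiate one heavy prefab from the bundle; everything else
            // stays on the server until it is actually needed.
            GameObject prefab = bundle.LoadAsset<GameObject>("TempleInterior");
            Instantiate(prefab);

            // Release the bundle's bookkeeping but keep loaded assets alive.
            bundle.Unload(false);
        }
    }
}
```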
A second key insight from Simonar's session was his discussion of post-processing effects, as he walked through the following stack, applied in sequence:
  • Raw – the scene as rendered through PBR with light and reflection probes
  • Eye Adaptation (SCION post-processing)
  • Tone Mapping
  • Vignette – focus toward the middle, reduce eye fatigue
  • Distortion and Grain
  • SSAO (Sonic Ether)
  • SSR – screen space reflection (Candela)
  • Bloom (SCION)
  • Color Grading

Simonar gave a great demonstration of what happens when you selectively turn off or tweak these effects, and emphasized that it's important not to lean on any one of them exclusively, but to use them subtly to shape the overall feel of the environment. These two talks exemplify what we look forward to from UNITE, as they effectively bridged the distance from immediate, hands-on help with specific techniques to provocative thinking about composition and visual storytelling at a high level.

Taylor Yust, Programming Lead

Project Tango

A couple of years ago I was involved in developing an augmented reality (AR) app for museums as part of a class project. I was using Unity and Qualcomm's Vuforia software to accomplish this, but it was rough, relying on pre-defined reference images in a database to determine both the user's relative position and what information to display. Around the same time, Google announced Project Tango, an Android platform that uses cameras and infrared depth sensors to build a 3D model of the environment around the user. At the time, Tango development kits were limited in availability, but I remember thinking how useful the platform would have been for my project, since its models and positioning data could be much more accurate than the image recognition I was using.

Fast-forward to UNITE 2015, and Google was present in full force at their Project Tango booth. Tango now fully supports Unity with its own SDK, and Google had demos showing how Tango can be used for incredibly accurate AR games and applications. As part of a promotion, Google was also distributing free (!!!) Tango kits to attendees in limited quantities. Some of the Tesseract team members (including yours truly) managed to snag one before they ran out. I've only just started playing with mine, but I'm really excited to see what can be done! Perhaps I could revisit AR and see what types of applications are possible (particularly for games)? Or maybe we could use Tango to scan historical busts in order to model in-game historical figures? We'll see what happens!

Project Management and Continuous Integration

While the subjects of project management and continuous integration might not sound sexy, they can be some of the most important when it comes to actually finishing and shipping products as complex as video game software. One of the sessions I attended at UNITE walked through some of the most important elements of successful game development. It was reassuring that the talk confirmed that, for the most part, Tesseract has implemented and integrated the key practices of good project management. For example, we have effective version control to ensure project changes don't inadvertently break the game, and we utilize agile software development techniques to keep team members organized and moving toward a shippable product. However, a number of items stuck out to me that I would like to investigate in the near future:
  • Bug Tracking – Currently, when there's a bug in the game, the team member who finds it will usually just mention it to me in casual conversation. While this was tenable in the past, it obviously doesn't scale well to larger projects. It can be difficult for me to mentally juggle these issues until they can be addressed, and I often cannot recreate them easily. If we implemented formal bug tracking software that captures information on where and how bugs occur, it could greatly expedite development.
  • Test Automation – It's not uncommon for a bug "fix" to actually introduce more issues than it resolves. Sometimes these new bugs aren't discovered until later down the line, which complicates things when new scripts or levels have been built on top of the faulty code. Unity now provides a package of tools that can automate the execution of defined test cases, ensuring bug fixes and new features don't break existing functionality (see the first sketch after this list).
  • Cloud Build – Making a "build" (i.e. compiling an entire game project into an executable application for an end user) can be an enormously time-consuming process. After working late into the wee hours of the morning, the last thing any of us wants to do is spend upwards of an hour making builds. Unity now offers Unity Cloud Build to automate this process by making the build on Unity's own servers, freeing us up to continue working (or go to sleep...). Having regular, automated builds also makes it easier to send the latest version of the game to playtesters for feedback (see the second sketch after this list).
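On the test automation point, here is a minimal sketch of the kind of engine-independent unit test that Unity's NUnit-based test tools can pick up and run automatically. DamageCalculator is a hypothetical class invented purely for illustration.

```csharp
using NUnit.Framework;

// Hypothetical game logic under test; kept free of engine dependencies
// so it can run in a fast, automated test pass.
public static class DamageCalculator
{
    public static int Apply(int damage, int armor)
    {
        // Armor soaks damage point-for-point; an attack never heals.
        return System.Math.Max(0, damage - armor);
    }
}

[TestFixture]
public class DamageCalculatorTests
{
    [Test]
    public void ArmorReducesDamage()
    {
        Assert.AreEqual(6, DamageCalculator.Apply(damage: 10, armor: 4));
    }

    [Test]
    public void DamageNeverGoesNegative()
    {
        Assert.AreEqual(0, DamageCalculator.Apply(damage: 3, armor: 8));
    }
}
```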
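On the build side, Cloud Build itself is a hosted service, but the underlying idea can be sketched locally with Unity's BuildPipeline API; the menu entry, scene list, and output path below are placeholders, not our actual project layout.

```csharp
using UnityEditor;
using UnityEngine;

// Editor-only sketch (place in an Editor folder): script a build so it can be
// kicked off unattended, e.g. from a nightly job, instead of by hand.
public static class AutomatedBuild
{
    [MenuItem("Tools/Build WebGL Player")]
    public static void BuildWebGL()
    {
        // Placeholder scene list; include every scene the build needs.
        string[] scenes = { "Assets/Scenes/Main.unity" };

        // In Unity 5.x, BuildPlayer returns an error string (empty on success).
        string error = BuildPipeline.BuildPlayer(
            scenes, "Builds/WebGL", BuildTarget.WebGL, BuildOptions.None);

        if (string.IsNullOrEmpty(error))
            Debug.Log("Build succeeded.");
        else
            Debug.LogError("Build failed: " + error);
    }
}
```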
Come back next week to hear from our Art Lead, Design and Narrative Lead, and Technical Director about their experiences at UNITE 2015!
