Game Engines
Past, Present and Future

By Kevin Normann

There are some very interesting developments happening in our industry today, particularly around video game engine advances that I am eager to talk about. But I am not going to. Well, not yet anyway… Instead I will practice some self-discipline, or “Zen” as my college karate instructor once taught me, and first reflect on why I, a 22+ year industry “veteran”, find myself genuinely excited and encouraged by these new developments. To fully understand why this is meaningful, we need to talk a little about the history of video game engine development. I hope my readers will find this personal reflection both familiar and useful in understanding why we should be optimistic about the future of the craft that we love so much.

A brief history

For some of us developers in the video game industry, there have been many opportunities to work on new and interesting games. In the early days, it was all about the games, but as hardware grew in capability and games grew in complexity, more and more time was spent wrestling with the technology behind the games. This led to a phase in the '90s and 2000s where companies with successful products, particularly proud of their technical achievements, would license their engines to other studios for use on other products. It was a natural progression, and it allowed many game developers to stay focused on the most important part of our jobs - the games.

In the fall of 2004 the industry received word of the Xbox 360 and PS3. Video game hardware took another leap in capability by moving strongly to multi-core processors, requiring game engineers to learn a new term - "concurrency". Instantly, existing video game engines on the market became outdated. In fact, none of the popular engines, or even the latest tech engines, were architected to perform well on the new hardware, let alone push the hardware to its limits. We used to joke at the time that software efficiency was going to be measured in "giga-NOPS", meaning billions of "no operations" per second, because there was no way to keep all those processors busy all the time. Indeed, it took over half a decade from their initial release before any title was published that arguably demonstrated the capability of either of these machines. And while the industry labored all that time to catch up to the hardware performance of the day, the hardware manufacturers kept pushing performance forward.
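To make "concurrency" concrete for readers who didn't live through that shift, here is a minimal sketch (in Python for brevity, though a real engine would be written in C++) of the structural change those machines demanded: per-frame work that used to run on one core gets fanned out across a pool of workers. The entity data and update function here are invented purely for illustration.

```python
from concurrent.futures import ThreadPoolExecutor

def update_entity(position, velocity, dt):
    # One entity's per-frame simulation step (a stand-in for real game logic).
    return position + velocity * dt

def update_serial(entities, dt):
    # The pre-2005 pattern: a single core walks every entity in turn.
    return [update_entity(p, v, dt) for p, v in entities]

def update_parallel(entities, dt, workers=4):
    # The multi-core pattern: the same work fanned out across a worker pool.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(lambda e: update_entity(e[0], e[1], dt), entities))

entities = [(float(i), 1.0) for i in range(8)]
assert update_parallel(entities, 0.5) == update_serial(entities, 0.5)
```

This shows only the structure, not the hard part: real engines need lock-free job systems, careful data layout, and dependency scheduling to actually keep every core busy every frame - which is exactly the challenge the giga-NOPS joke was about.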

Software has fallen behind

Today, we are still seeing hardware capability growing tremendously through the use of more and more processors. This is happening not just with the main CPU, but also with graphics cards (GPUs) and other processors and co-processors such as dedicated physics chips, video compression hardware, and so on. Even phones today commonly have four processor cores and will likely follow the same path as laptops and desktops. All of this processing power keeps growing through increased processor counts, and yet the software engines used to drive this technology have not kept up.

There are many reasons why software technology has fallen behind and I think they are rather obvious when you think about it.

  1. The primary reason game engines have not kept up with hardware performance and capability since 2009 is that engine developers did not feel pressure from the industry to do so. Why not? Because the industry was busy with an influx of new casual gamers, driven by approachable, personal technology (i.e. mobile devices and web/Facebook), which shifted the focus to casual/social experiences over visually stunning ones.
  2. Additionally, new monetization models such as free-to-play, freemium, and in-app currencies demanded a lot of developer attention, and it turns out the studios that were traditionally good at high-end game engine development were usually among the slowest to successfully incorporate the new models or adapt to new consumer expectations. This led to many closings and lost revenue, which in turn meant less high-end innovation.
  3. The few developers working on hardcore games (usually those working on sequels to existing popular franchises) were happy to make modest improvements within safe parameters. Less competition meant less reason to push for revolutionary ideas. General hardware speedups could be enjoyed without taking on the challenge of approaching the full capability of new hardware. In short, game developers left it to game engine middleware to adapt to new hardware capabilities, but the middleware companies were busy addressing the needs of the growing number of casual game developers.

Overall, the challenges of meeting the needs of a growing number of new and diverse platforms, new and diverse gamers, new monetization and marketing strategies, and new and innovative gameplay styles have kept the software arm of the video game industry quite busy on everything except hardware performance and capability. Meanwhile, over this same time, the hardware manufacturers have continued to push capability forward as they always have, widening the gap between the actual capability of the hardware and the capability that software can efficiently tap into. I am sure this phase has been frustrating for hardware manufacturers, who are keenly aware of their accomplishments over this time while most of the rest of us are not.

Where does that leave us?

The primary point I made above is that consumers spent many years exploring games that didn't require software to evolve toward greater technical capability. So where does that leave us? Is it the case that gamers are happy with today's experiences and are no longer interested in seeing technical limits pushed? Have we now produced the games that gamers always wanted, and will they finally be content to play only modest iterations of what they've already seen? Some might think so if they had only been watching the space for the last three years, observing the growing success of games like Minecraft. But veterans of the industry can tell you that while we have had some interesting and distracting new toys to play with in recent years, the same root force continues to effect change in our industry as it always has. Specifically, the hunger for new experiences will always drive developers to push the technology to revolutionize customer experiences.

Today there is a lot of talk about virtual reality and augmented reality, both of which require new levels of processing and new ways of engineering to pull off well. Games are also growing in size and complexity again: the casual "softcore" games of 2009 gave way to the "mid-core" games of 2012, and those in turn are bringing all of these new consumers to a level of sophistication that will be harder to distinguish from that of traditional hardcore gamers. The grandma of tomorrow might be as "hardcore" about her games as her granddaughter and grandson are today! The video game market is larger than ever, and these gamers are starting to grow hungry for the next "new" thing. Many of them may have started as casual gamers, but they are now plugged in and ready to adopt awesome technology along with everyone else.

This is all well and good, however there is still one unfortunate reality…

Video game engine technology is still lagging behind in several key ways. True innovators in the industry are finding that they must either compromise their vision to make it work with the middleware engine options available, or find the money and time to move their own mountains to reach their goals.

Here are some of the ways in which modern video game engines such as (need I name them?) Unity, Unreal, and HeroEngine (to name a few) fall short…

  1. Video cards are far more capable than video game engines can tap into, because the software paradigms these engines were built on are now many years (perhaps a decade) out of date. Modernizing the core architecture of an existing engine would require a huge and costly re-architecture. Further, this kind of engineering requires specialized, capable "out-of-the-box" thinkers who are uncommon even in this tech-driven industry.
  2. The growing power of cloud computing and cloud storage, combined with incredible data bandwidth, is pushing the boundaries of what is possible to seemingly limitless levels. Building games to take advantage of this power requires an entirely new level of organization, with a new level of tool chain and art and technology pipeline, to construct amazingly intricate worlds and experiences. If the tools and processes aren't robust enough, complex projects of the future will collapse under their own weight. This has happened before at moments of big change. As an example of how challenging (and dangerous) these migrations can be, consider what happened to Midway when they made a serious effort to move into "open world" games cross-platform using the best of the "modern" technology of 2008. The effort was a big factor in killing the company. I used to teach a class to new hires at EA where I graphed the power of the PS1 on the board, measuring processor power versus amount of RAM, to produce a small dot not much larger than a pixel on the screen. Then I placed a similar graph of PS2 power next to it that was about one inch square, and finally showed the power of the PS3, which filled three 8.5x11" pages. In this example I explained how the power of these machines tends to grow in two dimensions (processor power versus RAM), but that power is spent in six dimensions on the game side, namely world size, world detail, character count, character detail, and character intelligence, with the sixth dimension being the aforementioned giga-NOPS. The question "what do you do when you can do anything?" is one that video game developers have to contend with more and more, and it will only get more challenging to answer in the years to come. Game engines must provide far more help for managing complexity than ever before, and modern game development tools are not yet up to the challenges before us.
  3. Modern engines have made first efforts to improve the way teams work together, with varying levels of success, but they fall far short of where they should be for large teams working on larger projects from remote locations, on technology that will live and operate in the cloud. Talk to developers and you will hear stories of lost work and effort because their tools restrict them from being as productive as they feel they could be. Even when developer tools and engines provide for group collaboration, they are often full of gotchas that allow one developer to stomp on the work of another. Making forward progress on a game requires a tremendous amount of middle management and thought cycles spent protecting existing progress, which takes away from individual productivity. If this is true for the games of today, how are these engines going to fare developing the games (or should I call them gaming ecosystems?) of tomorrow? The problem is even worse than this, because the most popular modern engine, Unity3D, is particularly bad in this department. It was propelled to number one status by allowing small teams to develop cross-platform cheaply and target the web at the same time, but for the next generation of innovation (VR, AR, and distributed cloud-based games) Unity's core architecture will fall short of meeting developers' needs. It will be relegated to playing a smaller role in the development of the same small games it has been used for in the past.
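The "power grows in two dimensions but is spent in six" argument from point 2 can be made concrete with a back-of-the-envelope calculation. This is my own simplification, not a formal model: if a machine's total power budget is divided multiplicatively across six game-side dimensions, each dimension alone advances far more slowly than the headline hardware number suggests.

```python
def per_dimension_growth(total_growth, dimensions=6):
    # If a machine is `total_growth` times more powerful overall, and a game
    # spends that budget multiplicatively across `dimensions` axes (world size,
    # world detail, character count, character detail, character intelligence,
    # and giga-NOPS), each axis alone can only grow by the d-th root.
    return total_growth ** (1.0 / dimensions)

# A 100x more powerful console buys only ~2.15x more along each of six axes.
print(round(per_dimension_growth(100.0), 2))  # → 2.15
```

This is why a console generation that looks enormous on a spec sheet can feel like a modest step in any single direction - and why engines that help developers manage all six axes at once matter so much.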

Why can’t modern video game engines adapt?

Can't existing engines adapt to market pressures such as these? The short answer is "ultimately, no", and the long answer is that of course they will try, but there are other forces acting on them that will hold them back. Here is where history is our best teacher. Video game engines have a natural life cycle that we've seen over and over. The ones that succeed do so because they offer solutions to pain points that developers are facing at the time. If the pain points are particularly challenging, then the first engine to answer the challenge usually succeeds in big ways (such as Unity solving the pain that most game developers at the time faced in trying to move their games to the web).

However, eventually two things happen. First, the more popular an engine is, the more teams license it for their games, and more game teams using an engine means its developer spends more and more time dealing with support issues. Refactoring becomes much harder, and re-architecting becomes costly, challenging, and time consuming. In a real way, the foundation of the technology solidifies into concrete. Meanwhile, innovations in games, technologies, and the paradigms driving them continue to move forward. Smaller teams with solid funding are most often the ones to push the boundaries forward, in little and sometimes big ways that over time create quite a large technology gap between where a leading game engine started out and where the industry is currently exploring. So while the developers of the most popular engines never stop looking for ways to innovate and provide for their clients, the fact that they have so many clients works against that innovation. Second, on the money side, they can make more money by releasing small updates quickly, while large re-architecting efforts cost a lot of time and money and create increased support challenges for them and their customers. Not to mention how easy it is to become complacent when your current technology is riding high and there doesn't seem to be any significantly better technology ready to challenge it. Until they have a compelling reason, no video game engine developer will take on the challenge of seriously re-architecting their products.

So where do we stand with current engine technology?

Just thinking in terms of the modern FPS king Unreal, mid-level game engine Unity, and MMO middleware HeroEngine, it is clear that there is a big gap between what the software technology of today offers and what the potential really is. The core architecture within each of these engines is now years out of date and would require a tremendous effort to rebuild to take full advantage of modern hardware and cloud-based computing potential. And, again, the most popular engine for most games, Unity, requires the greatest amount of re-architecture of them all. Even if these companies had the motivation and were willing to pay the cost, the time to make such changes would be measured in years, effectively generating an almost entirely new engine. Their greatest asset is that they all have plug-in architectures that allow developers to customize and enhance features, but this can only get you so far. Ultimately their core architectures will continue to feel more and more restrictive to developers seeking to produce new and exciting experiences.

All of this said, there is reason for optimism! Industry forces for change are growing on many fronts in response to the increasing pains I’ve outlined. Innovative emerging companies, with new paradigms of thought around cloud-based development solutions, platform architectures, live services, and parallel computing, have been quietly building technology and are showing great promise. These companies are being run by developers just like you and me, who are tired of the inefficiencies of the existing solutions and are intent on helping developers of all levels and backgrounds get the most out of limited budgets to realize their creative ideas. One such company is MaxPlay, which is building a next-generation video game engine from the ground up with these modern technologies and principles at the heart of its design. I will go into more detail on these innovations in follow-up posts. Please look for them on Gamasutra, on my Midnight Studios company blog, or wherever you may have discovered this post.

Thanks for reading!