Should games skip cutscenes entirely?

Video games as a storytelling medium have often been inspired by movies, and the clearest example of this is the use of cutscenes. It is often said that Pac-Man was the first game to use cutscenes rather than going directly from one level to the next without interruption: after the player clears certain stages, a short vignette plays, showing Pac-Man and the ghosts chasing each other.

While these little vignettes are obviously a far cry from the cutscenes found in modern games, the core concept is the same.

The game takes control of the character away from the player during a sequence in order to present some kind of new information. The length of these sequences varies widely: Konami’s Metal Gear Solid series is famous for its long cutscenes, with Metal Gear Solid 4 clocking in at over eight hours of them.

Their purposes are just as varied: they are used to introduce characters, develop established ones, and provide backstory, atmosphere, dialogue, and more.

However, despite their ubiquity in modern big-budget games, cutscenes aren’t necessarily the best way to tell a story in a game. Many highly acclaimed games use few of them, preferring to leave the player in control of the character throughout.

Valve Software’s Half-Life 2 is currently the highest-rated PC game of all time on the review aggregation site Metacritic, and it has only a single cutscene at either end of the game. Control is rarely taken away from the player for more than a few moments, aside from one on-rails sequence towards the end, and much of the background information that other games would present in a cutscene is instead conveyed through scripted events or details in the environment.

But are Half-Life 2’s non-skippable scripted sequences really that different from cinematics? After all, the player often can’t progress until other characters finish their assigned actions and dialogue, so why not use traditional cutscenes and be done with it? To answer that, we first need to look at what makes video games unique as a storytelling medium. Unlike movies, where the viewer has no control over the action, or traditional board games, where player actions have little visual consequence, video games offer a rare opportunity to meld interactivity and storytelling. Gone Home, Dear Esther, and other games in the so-called “walking simulator” genre have been praised as excellent examples of the kind of storytelling that only gaming can offer.

However, for some gamers, these games present an entirely different problem: while they rarely take control away from the player, they also offer very little in the way of gameplay. In fact, Dear Esther gives the player no way to affect the world around them: the only available action is to walk a predetermined path until the game ends. There is no way to ‘lose’, no interaction with the environment, just what amounts to a scenic drive with narration overlaid. So despite the lack of in-game cutscenes, the near-complete absence of player control and interaction in the first place means there’s little to set it apart from a rather drawn-out cutscene.

As video games exist today, there seems to be something of a dichotomy between traditional storytelling and gameplay. For a game to tell a story, it must place some limit on what the player can do, whether temporarily in the form of a scripted scene or sequence, or by restricting the player’s actions over the course of the game. Perhaps future games will manage to fully integrate player interaction with compelling storytelling. But that won’t be achieved by taking control away from players and forcing them to watch a short movie instead of letting them play.
