
Platform:
Internal tools / systems / processes

Engine:
Unreal Engine 5

Language:
Unreal Blueprints / C++

Tools used:

Unreal Editor, JIRA, Confluence, P4V

Duration:

June 28, 2022 - 2025

Team size:

600+

Current status:

  • Work finished

Roles

  • Senior Technical Level Designer on PCF Core systems 

  • Owner of UE5 gameplay scripting system and Blueprint framework

  • Owner of new Quest System

  • Owner of AI Ant Farm System 2.0 (AI pre-/non-combat behavior)

Other

  • Created company's intake assignment, interviewed new candidates

  • Trained / gave workshops to Tech team about Blueprints and UE5

  • Reviewed Tech Team members' Blueprints, documents, and deliveries

  • Bug fixing

PCF Core description:

Work I performed for a quickly growing and expanding company that needed systems and processes that could scale across projects. Prior to this, nearly all work was project-specific, and often wasted once each project ended.

To prepare our projects for Unreal 5 World Partition and eliminate Level Blueprint usage in our game, coders and Technical Level Designers cooperated to create a new gameplay scripting system known internally as declarative patterns. I owned this system and was responsible for its development as well as all involved Blueprints.

 

Declarative Pattern Blueprints

Declarative Patterns are Blueprint actors that are always-loaded in World Partition and can't be placed on data layers. They control the majority of high-level gameplay logic, making heavy use of facts to communicate both among themselves and with other systems of the game. Patterns need this always-loaded state so they can listen to and manipulate facts, and so their logic does not get streamed out when the player moves out of range of their World Partition cell. Because patterns are always-loaded, they must have a low memory footprint (I generally aimed for < 2 MB).

My role: feature owner, designed and created the BP framework, created most of the declarative pattern Blueprints, created BP standards for our department, created our BP review process and performed the reviews
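The fact-driven communication described above can be sketched as a tiny registry in plain C++. This is a hypothetical simplification with invented names, not the actual engine or studio API; always-loaded patterns would both write facts and register listeners against something like it.

```cpp
#include <cassert>
#include <functional>
#include <map>
#include <string>
#include <vector>

// Hypothetical sketch of a fact registry: patterns stay always-loaded so they
// can both listen for and manipulate facts at any time.
class FactRegistry {
public:
    using Listener = std::function<void(const std::string&, int)>;

    void SetFact(const std::string& Name, int Value) {
        Facts[Name] = Value;
        for (auto& L : Listeners) L(Name, Value);  // notify listening patterns
    }
    int GetFact(const std::string& Name) const {
        auto It = Facts.find(Name);
        return It != Facts.end() ? It->second : 0;  // unset facts read as 0
    }
    void AddListener(Listener L) { Listeners.push_back(std::move(L)); }

private:
    std::map<std::string, int> Facts;
    std::vector<Listener> Listeners;
};
```

A pattern such as a door controller would register a listener for its trigger fact and react the moment any other system sets it.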

Level Design Blueprints

LD Blueprints (LDBPs) are what we call our lower-level Blueprint actors, which are not always-loaded. They should never handle higher-level logic like fact manipulation, and are usually controlled by one or more declarative patterns.

My role: feature owner, created & maintained most of the LD Blueprints, created BP standards, performed reviews

Blueprint framework

The Blueprint framework covers both Declarative Patterns and LD Blueprints, and was my personal invention. It handles all functionality shared between the two by implementing parent/child structures, reusable logic chunks, and interfaces. It also ensures all patterns have consistently named and functioning variables, such as for triggering.

My role: feature owner, designed/created/implemented the framework from scratch

Tech LD Blueprint Standards

I invented and implemented our team's Blueprint standards and peer review process. Before my efforts, the team had no such standards or review process in place at all.

image.png
image.png
DPicons-DP_DataLayer.png
image-2023-4-24_12-22-4-12.png
image-2023-4-24_12-22-4-17.png
image-2023-4-24_12-22-4-13.png
image-2023-4-24_12-22-4-20.png

Examples
 

Declarative Pattern Blueprints created by me:

  • Parent LDDP: parent class; holds all shared functionality of declarative patterns

  • DataLayerController: allows streaming of Unreal 5 DataLayers

  • QuestController: allows manipulation of the player's Quests and Quest Objectives

  • ActorEventController: allows firing custom events on other actors, and creating a chain/sequence of events

  • VOPlayer: plays NPC VoiceOvers + lipsync + subtitles

  • PlayerCameraFadeController: fades in/out the player cameras

  • CustomFunctionController: misc (code) function calls, such as opening character customization or executing console commands

  • ActionPointController: provides ways to manipulate AI ActionPoints, used in our Living World - Ant Farm System

  • DestroEventController: allows control over special Chaos Destruction BPs

  • SimpleCharacterController: allows control over non-AI NPCs

QuestController

Allows manipulation of player's Quests and Quest Objectives (start, update, fail, etc).

image-2024-2-27_20-42-4.png
LDDP_QuestControllerExampleOverlapArena01GIFOpt.gif

DataLayerController

Allows various functionality for streaming in/out Unreal 5 DataLayers.

image-2024-2-23_20-11-46.png
DataLayerControllerRandomLoad.gif

PlayerFadeController

Fades in/out the player camera(s). 

image-2024-3-1_20-20-55.png
PlayerCameraFadeControllerGIF.gif

Voiceline Player

Plays VOs, adds localized subtitles, and plays facial/lipsync animations. Audio can follow a chosen target if it's moving.

image-2024-2-29_15-11-50.png
DebugVOSettingGIF.gif

ActionPointController

Provides ways to manipulate AI ActionPoints (SmartObjects), used in AI non-combat situations. Its main purpose is to force-start combat or force AI out of their Ant Farm ActionPoints.

ActionPointControllerEdited.png

Example GIF cannot be shown yet due to project NDA

nda-icon-1t.png

ActorEventController

Allows firing custom events/functions on any other actor in the project, and creating chains/sequences of events

actoreventcontrollervariables.png
ActorEventControllerExampleGIFOpt.gif

DestroEventController

Allows control over special Chaos Destruction BPs, such as setting their state or starting a destro transition and level sequence

DestroEventControllerEdited.png

Example GIF cannot be shown yet due to project NDA

nda-icon-1t.png

SimpleCharacterController

Allows control over non-AI NPCs, such as to play or change set animations, or start/stop movement along a spline

SimpleCharacterControllerSettings.png

Example GIF cannot be shown yet due to project NDA

nda-icon-1t.png

Other declarative pattern Blueprints reworked and maintained by me:

  • AnimatedActorController: allows control over animated props such as doors

  • ChangePlayerAbility: assigns various gameplay effects or abilities to the player(s)

  • EncounterController: provides various ways of controlling a combat encounter

  • ObeliskController: starts an encounter where the player must survive waves of enemies and stay in range of an obelisk until it is fully charged

First POC / Feature Owner

As owner of the declarative patterns and LD Blueprints, I was the first point of contact for any questions about level logic, Blueprint requests, facts, and related bugs or concerns. I also drove the development and planning.

The Quest System allows creation and execution of Quests for players. I owned this system and worked alongside programmers to create and maintain a brand-new system across many iterations.

image.png
Quest_Main_Available_Icon_001.png

Quest Graph
 

The core of this system is a custom quest graph tool we created for developers in Unreal 5. Here, designers get a visual overview of all required steps/objectives, branching paths, and transition conditions, and can adjust them on the fly.

image-2024-1-5_14-40-29.png

Index:

1. "Start Quest" node. Must be the start of every graph. Dragging an arrow onto another node connects it to the next step

2. "Objective" node. Many types of objectives and subobjectives are possible. Objectives can be parallel, sequential, or both

3. "End Quest" node. Must be at the end of the flow.

4. Selecting any node allows you to edit details about it, such as facts required for completion, objective text, marker logic, or subobjectives inside it

Quests are updated through declarative patterns - which I also owned (see section above).
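The graph above can be sketched as a tiny state machine in plain C++. Node and fact names here are invented for illustration; the real tool is a full visual editor with many more objective and transition options.

```cpp
#include <cassert>
#include <map>
#include <set>
#include <string>
#include <vector>

// Hypothetical sketch: an objective completes when its required fact is set,
// and the quest advances along an outgoing edge. A node without outgoing
// edges acts as the "End Quest" node.
struct ObjectiveNode {
    std::string RequiredFact;       // fact that completes this objective
    std::vector<std::string> Next;  // outgoing edges (branching allowed)
};

class QuestGraph {
public:
    std::map<std::string, ObjectiveNode> Nodes;
    std::string Current = "Start";

    // Advance while the current objective's fact is present.
    void Update(const std::set<std::string>& Facts) {
        while (!Finished()) {
            const ObjectiveNode& N = Nodes.at(Current);
            if (!Facts.count(N.RequiredFact)) break;
            Current = N.Next.front();  // sketch: always take the first branch
        }
    }
    bool Finished() const { return Nodes.at(Current).Next.empty(); }
};
```

In the actual system, the facts driving these transitions come from the declarative patterns placed in the level.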

Objective types

Throughout development many iterations were needed, such as implementing more objective types (boolean, integer, timer/location-based, sub-objective, etc.). I created the Technical Designs the programmers needed to implement them, tested and documented them as they became available, and explained new features to the level design team wherever needed.

Saving & Restoring process

One of my biggest endeavors was helping figure out how we would handle checkpoints, saving, and loading in Unreal 5, especially taking into account drop-in/drop-out three-player co-op. This was a tough process that went through many iterations across several months, with help from various code departments such as online, gameplay, and UI.

In the end I came up with a fairly robust solution: the quest system saves progress in various ways as the quest graph updates, and on load the declarative patterns set up the state of the game by checking the player's facts, active quest, and progress within that quest.
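A minimal sketch of that restore idea in plain C++: instead of replaying events, each pattern re-derives its state from the saved facts on load. The fact names and the gate/bridge examples are invented for illustration.

```cpp
#include <cassert>
#include <functional>
#include <map>
#include <string>
#include <vector>

// Hypothetical sketch: a save stores the active quest and the fact table;
// patterns register a rule that re-applies world state for a saved fact.
struct SaveGame {
    std::string ActiveQuest;
    std::map<std::string, int> Facts;
};

struct PatternRestoreRule {
    std::string Fact;                     // fact the pattern listens to
    std::function<void(int)> ApplyState;  // sets up the world for that value
};

void RestoreWorld(const SaveGame& Save, const std::vector<PatternRestoreRule>& Rules) {
    for (const auto& Rule : Rules) {
        auto It = Save.Facts.find(Rule.Fact);
        if (It != Save.Facts.end())
            Rule.ApplyState(It->second);  // e.g. snap a door open, skip a cutscene
    }
}
```

Rules whose fact was never saved simply don't run, so the level comes up in its default state.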

Quest Example Level

I maintained a Quest System Example Level for our project with many example setups of various quest types and objective types. This sample level was used both to explain the quest system for new people, and as a testing ground for new and existing features. Coders especially appreciated this level for the ability to quickly test out their new features.

First POC / Feature Owner

As owner of the quest system I was first point of contact for any quest system feature requests, bugs, questions or concerns.

I invented the AI Ant Farm 2.0 system to replace our aging and outdated system used on Outriders and Outriders: WorldSlayer. It manages all pre-combat and non-combat AI behaviors, competes with the systems seen in other AAA games, and solves the frustrations and the lack of features and tools our team experienced with the old system. I solo-designed this entire system from start to finish, and was simultaneously responsible for all of the Living World content on the projects I was assigned to.

image.png

ActionPoints

AI ActionPoints: these are our custom version of "SmartObjects": Blueprint actors prepared by technical designers (myself) and placed in levels to control the behaviors of AI Characters.

 

ActionPoints work by having a number of slots that can be occupied by AI Characters. An AI can occupy and move into a slot in the game world and take actions as pre-defined in the ActionPoint's graph, for example:

  • Play (sets of) animations, with various transition conditions between them

  • Equip/unequip weapon

  • Play AKEvent sounds or Niagara VFX

  • Spawn and attach (or de-spawn/detach) objects, such as a cup to hold

  • Make fact changes that affect the rest of the level logic (for example to open a door)

  • Perform synchronized actions with other AI

  • React to combat start

  • Check/Set custom variables, and more..

image.png

Image: Example ActionPoint Graph
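The slot mechanism described above can be sketched in plain C++. This is a hypothetical simplification (the real ActionPoints are Blueprint actors with graphs): an AI must claim a free slot before moving in and running the point's scripted actions.

```cpp
#include <algorithm>
#include <cassert>
#include <vector>

// Hypothetical sketch of ActionPoint slot occupancy.
class ActionPointSlots {
public:
    explicit ActionPointSlots(int InMaxSlots) : MaxSlots(InMaxSlots) {}

    bool Claim(int AIId) {
        if (static_cast<int>(Occupants.size()) >= MaxSlots)
            return false;  // point is full, AI must pick another
        Occupants.push_back(AIId);
        return true;
    }
    void Release(int AIId) {
        Occupants.erase(std::remove(Occupants.begin(), Occupants.end(), AIId),
                        Occupants.end());
    }
    int FreeSlots() const { return MaxSlots - static_cast<int>(Occupants.size()); }

private:
    int MaxSlots;
    std::vector<int> Occupants;  // ids of AI currently using the point
};
```

Multi-slot points are what make the synchronized actions between several AI possible.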

AI Simulation - Needs and Memory System

AI Needs Simulation: AI have hidden needs variables they try to fulfill, similar to what you might see in The Sims. Using ActionPoints fills or drains certain needs values, causing constant adjustment. Each AI autonomously moves around the scene and tries to fulfill its own needs, resulting in interesting emergent behaviors. Needs were designed to be non-project-specific and easily adjustable by designers. In cases where these auto-behaviors aren't preferred, custom setups are possible.

image.png

AI Needs Strategies / Query Filtering: In order to create AI archetypes that behave differently from each other, AI can have various strategies and priorities when fulfilling their needs. For example, snipers might put priority on fulfilling Safety, whilst Berserkers follow their impulses and prefer Fun and negative Safety values. Riflemen or Captains could be more inclined to organize and delegate tasks to others, prioritizing Social and Contribution values.  These types of categorizations help players easily recognize archetypes when approaching combat situations, allowing them to formulate strategies more quickly.

AI Memory system: AI remember the last few ActionPoints they used, so they can try to avoid repeating them.

Image: Example needs ranging from -100 to +100 values. "Sub-needs" within needs are possible.
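The needs, strategy weights, and memory system described above can be sketched as a small utility-scoring loop in plain C++. All names and numbers here are invented for illustration; the real system is far richer and data-driven.

```cpp
#include <algorithm>
#include <cassert>
#include <deque>
#include <map>
#include <string>
#include <vector>

// Hypothetical sketch: each AI scores candidate ActionPoints by how much they
// fill its most urgent needs, weighted by its archetype strategy, and skips
// points it remembers using recently.
struct APOption {
    std::string Name;
    std::map<std::string, float> NeedDelta;  // how much this point fills each need
};

struct Agent {
    std::map<std::string, float> Needs;    // -100 (starved) .. +100 (satisfied)
    std::map<std::string, float> Weights;  // archetype priorities, default 1
    std::deque<std::string> RecentAPs;     // memory of last few points used
};

float Score(const Agent& A, const APOption& AP) {
    if (std::find(A.RecentAPs.begin(), A.RecentAPs.end(), AP.Name) != A.RecentAPs.end())
        return -1e9f;  // memory system: avoid repeating recent points
    float S = 0.f;
    for (const auto& [Need, Delta] : AP.NeedDelta) {
        auto NeedIt = A.Needs.find(Need);
        if (NeedIt == A.Needs.end()) continue;
        float Urgency = (100.f - NeedIt->second) / 200.f;  // 0 when full, 1 when starved
        auto WIt = A.Weights.find(Need);
        float Weight = (WIt != A.Weights.end()) ? WIt->second : 1.f;
        S += Delta * Urgency * Weight;
    }
    return S;
}

std::string ChooseAP(const Agent& A, const std::vector<APOption>& Options) {
    const APOption* Best = nullptr;
    float BestScore = -1e9f;
    for (const auto& Opt : Options)
        if (float S = Score(A, Opt); S > BestScore) { BestScore = S; Best = &Opt; }
    return Best ? Best->Name : "";
}
```

Raising an archetype's weight for a need (e.g. Safety for snipers) is enough to bias the whole population toward recognizably different behavior.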

Ant Farm Chains and Patrols

Ant Farm Chain: As simulation isn't an end-all-be-all solution, I also allowed designers to create manually scripted behaviors by chaining ActionPoints one after another. An Ant Farm Chain actor can be placed in the scene to achieve this. At the end of the chain, the designer chooses what happens next: for example, restart or reverse the chain, idle at the end point, or switch to simulation behavior.

Patrol System: It's possible to avoid ActionPoints altogether and simply patrol a set of locations. I created a spline tool to aid in setting up such patrols. Group patrols with multiple AI are also possible.

Patrols_CreationTool_17-02-2022Cropped.gif

Image: Example of the Spline patrol tool I made (this was when the project was still in Unreal 4)

Ant Farm Chunk

Ant Farm Chunk: A high-level actor in the scene that references/contains all of its AISpawners, (to-be-)spawned AI, ActionPoints, and some settings. When on simulation behavior, AI spawned on a chunk can only use ActionPoints within that chunk. This allows setting up many "bubbles" of Ant Farm across a level, each of which can be individually streamed in/out for performance.

antfarmchunk.png

Image: Example Ant Farm Chunk connected to several AISpawners and ActionPoints

Perception - Presets, Avoidance, Attraction

Perception overrides: AI perception can be overridden to various temporary presets while on Ant Farm behavior. This is because AI requires far bigger perception ranges while in-combat, compared to pre-combat situations. In pre-combat, the player should be able to sneak up and observe the AI behaviors. In some cases, AI shouldn't even react to the player at all until a manual trigger like a fact change or player trigger happens. When the fight starts, the AI switches to their standard combat presets. 

image.png

Image: Example AI observing other AI and cubes with its sight sense

AI avoidance: A system that controls whether, how, and at what ranges AI Characters avoid each other. I personally set up the avoidance ranges for every enemy and hub character on the project I was on.

AI Attractors: ActionPoints are able to "attract" AI towards them at a specific time. For example, a musician starts playing, and a crowd is drawn to the point with high priority.

Pre-combat - Holstering, Alerting, ForceOuts

Equip/holster system: AI can equip or holster weapons when entering or exiting ActionPoints. For example, an AI that normally patrols around holding a rifle might need both hands free to perform an action. They will holster the rifle on their back to get into the ActionPoint, perform the action, then re-equip the weapon and go back to patrolling. 

Gradual alerting system: Rather than becoming instantly aware of the player when perceiving them, enemies gradually increase their alertness level, similar to many stealth games. If alerted but not yet fully in combat state, they can make VO callouts and search for potential threats.
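A minimal sketch of such a gradual alert meter in plain C++ (thresholds and gain/decay rates are invented example values): awareness rises while the player is perceived, decays otherwise, and locks into combat once it peaks.

```cpp
#include <algorithm>
#include <cassert>

// Hypothetical sketch of gradual alerting.
class AlertMeter {
public:
    enum class State { Unaware, Suspicious, Combat };

    void Tick(bool bPerceivesPlayer, float DeltaSeconds) {
        const float GainPerSec = 50.f, DecayPerSec = 10.f;
        Level += (bPerceivesPlayer ? GainPerSec : -DecayPerSec) * DeltaSeconds;
        Level = std::clamp(Level, 0.f, 100.f);
        if (Level >= 100.f) bLockedInCombat = true;  // combat doesn't decay away
    }
    State GetState() const {
        if (bLockedInCombat) return State::Combat;
        if (Level >= 40.f) return State::Suspicious;  // VO callouts, searching
        return State::Unaware;
    }

private:
    float Level = 0.f;
    bool bLockedInCombat = false;
};
```

The Suspicious band is where the searching and callout behaviors live; crossing into Combat is what triggers the ForceOuts described below.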

ForceOut: An AI being suddenly forced out of an ActionPoint they are currently on, or forced out of Ant Farm behavior entirely, is called a ForceOut. Usually this is caused by a combat trigger, such as the player making noise or being spotted by an enemy. 

Conditional ForceOut: A feature which allows designers to put in extra conditions for allowing a ForceOut to happen. For example, the character ignores its sight and hearing sense but will react to damage, or a fact condition must be true. 

Extended ForceOut: Allows AI to run to a "last effort" point before starting combat. For example, running to press a button, open a cage, or destroy something on the level.

Custom Editor Tools

I designed an Unreal 5 Editor Mode toolset for our developers to set up Ant Farm content, and worked with a tools programmer to make this toolset a reality. 

antfarmeditorsmall.png

Image: The Ant Farm editor mode menu. Due to NDA I cannot yet show the tabs in action, as they contain project-specific content.

The tool allows beginning-to-end setup, tweaking, and management of Ant Farm on levels for designers.

 

Tool button explanations:

  • SEL: Within currently selected chunk, allows clicking on Spawners and ActionPoints in the viewport to select them. When selecting an ActionPoint, user sees detailed information about the AP and a preview video in a window that pops up. They can also tweak various settings in the details

  • +CHUNK: Switch to a special mode where the user can pre-set some settings for a chunk, then click in the viewport at a place of choice to create a new Ant Farm Chunk. After confirming, the chunk is saved as a new sub-level, and a data layer is automatically created. The editor then automatically selects and switches to editing this new chunk

  • +AP: Switch to a special mode where user can select a new ActionPoint to place from handy dropdown menus, accompanied by detailed information and preview videos. User can then click in the viewport to place the selected ActionPoint. AP is automatically connected to the chunk currently being edited.

  • +SP: Switch to a special mode where user can place spawn points by clicking in the viewport. SP is automatically connected to the chunk currently being edited.

  • DP: Delete Point. Deletes currently selected Spawn Point(s) and/or ActionPoint(s) safely

  • DC: Delete Chunk. Deletes currently selected Ant Farm Chunk safely, including all ActionPoints and SpawnPoints inside

  • VAL: Validate Chunk. Runs various validation checks on the setup. If problems are found, these are displayed in warning or error messages inside the tool

  • HELP: Pops up a menu with buttons that can be clicked to access various important user documentation on Confluence

  • HGL: Highlight mode. Shows all the connections between the Chunk, ActionPoints, SpawnPoints, Ant Farm Chains, as well as any potential connection to a combat encounter

Users can select an Ant Farm Chunk either with the dropdown next to "Chunk", which populates automatically with all chunks currently on the level, or by clicking on the chunk of choice in the viewport. 

ActionPoint Graph

I designed the ActionPoint graph, which makes setting up ActionPoints much easier than our SmartObjects before. The graph is split into "stages", where a technical designer sets up the precise animations (or randomized arrays) to play at each stage, the conditions for transitioning to other stages, and modifier effects to apply to the AI either in the current stage or across the entire graph. Example modifiers are ignoring gravity or disabling hit reactions.

 

Aside from designing the graph itself, I set up 130+ ActionPoints with such graphs on one of the projects I was on, confirming its effectiveness.

apgraph example.png

Image: Example part of an ActionPoint Graph, with some blurring for project-specific NDA reasons. It was intended to be supported by a visualizer tool later.

Encounter System Interaction

I designed how the Ant Farm should interact with our Encounter System, a separate generic system that controls spawning enemies and managing active combat encounters. Designers can add a special type of Ant Farm wave to the graph, with a setup inside each wave that's unique to Ant Farm, and connect them to the combat waves with transitions.

Image: Example Encounter Graph with an AntFarm wave spawned on the left side. It is connected to a combat wave by a transition.

antfarmwave2.png

Image: Example of the setup of an Ant Farm wave inside the encounter graph. Users can make unique setups for 1-2-3 player situations by switching tabs with the buttons at the top.

Test map

Of my own accord, I designed and maintained a custom test map containing all Ant Farm features, all AI Characters, and all AI ActionPoints, for easy testing and bug fixing. This was greatly appreciated by everyone on the project.

First POC / Feature Owner

As feature owner and sole technical designer of the Ant Farm 2.0 System and Living World content, I was the first point of contact for anything involving this feature: new feature or content requests, bugs, questions, and concerns. I also drove all the development and planning.

Game Development Portfolio © 2025 - Goran Xaverius de Ruiter, the Netherlands.
