


Platform:
Internal tools / systems / processes
Engine:
Unreal Engine 5
Language:
Unreal Blueprints / C++
Tools used:
Unreal Editor, JIRA, Confluence, P4V
Duration:
June 28, 2022 - 2025
Team size:
600+
Current status:
Work finished
Roles
- Senior Technical Level Designer on PCF Core systems
- Owner of UE5 gameplay scripting system and Blueprint framework
- Owner of new Quest System
- Owner of AI Ant Farm System 2.0 (AI pre-/non-combat behavior)
Other
- Created the company's intake assignment and interviewed new candidates
- Trained the Tech team and gave workshops on Blueprints and UE5
- Reviewed Tech team members' Blueprints, documents, and deliveries
- Bug fixing
PCF Core description:
Work I performed for a quickly growing company that needed systems and processes able to scale across projects. Prior to this, nearly all work was project-specific and often wasted once each project ended.
To prepare our projects for Unreal Engine 5's World Partition and eliminate "Level Blueprint" usage in our game, coders and Technical Level Designers cooperated to create a new gameplay scripting system known internally as declarative patterns. I owned this system and was responsible for its development, as well as for all Blueprints involved.
Declarative Pattern Blueprints
____________________________________________________________________
Declarative Patterns are Blueprint actors that are always loaded in World Partition and can't be placed on data layers. They control the majority of high-level gameplay logic, making heavy use of facts to communicate with each other and with other systems of the game. Patterns need this always-loaded state so they can listen to and manipulate facts, and so their logic does not get streamed out when the player moves out of range of their World Partition cell. Because patterns are always loaded, they must have a low memory footprint (I aimed for < 2 MB).
My role: feature owner, designed and created BP framework, created most of the patterns for our game, created BP standards for our department, created BP review process and performed the reviews
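The project's actual fact API is not shown here, but the core idea (a shared store of named values that always-loaded patterns can read, write, and listen to) can be sketched in standalone C++. All names below are hypothetical and purely illustrative:

```cpp
#include <functional>
#include <map>
#include <string>
#include <vector>

// Hypothetical sketch of a "fact" store: named integer facts plus change
// listeners. Always-loaded patterns would register as listeners so they
// can react to fact changes anywhere in the level.
class FactStore {
public:
    using Listener = std::function<void(const std::string&, int)>;

    void SetFact(const std::string& Name, int Value) {
        Facts[Name] = Value;
        for (auto& L : Listeners) L(Name, Value);  // notify listening patterns
    }

    int GetFact(const std::string& Name) const {
        auto It = Facts.find(Name);
        return It != Facts.end() ? It->second : 0;  // unset facts read as 0
    }

    void AddListener(Listener L) { Listeners.push_back(std::move(L)); }

private:
    std::map<std::string, int> Facts;
    std::vector<Listener> Listeners;
};
```

Because patterns both publish and subscribe through the store, they never need direct references to each other, which is what makes this style of scripting work across independently streamed World Partition cells.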
Level Design Blueprints
____________________________________________________________________
LD Blueprints (LDBPs) are what we call our lower-level Blueprint actors, which are not always loaded. They should never handle higher-level logic like fact manipulation, and are usually controlled by one or more declarative patterns.
My role: feature owner, created & maintained most of the Blueprints for our game, created BP standards, performed reviews
Blueprint framework
____________________________________________________________________
The Blueprint framework covers both Declarative Patterns and LD Blueprints, and was my personal invention. It handles all of the functionality shared between declarative patterns and LDBPs by implementing parent/child structures, reusable logic chunks, and interfaces. It also ensures all patterns have similarly functioning and similarly named variables, such as for triggering.
My role: feature owner, designed/created/implemented the framework from scratch
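The parent/child idea can be illustrated with a small standalone C++ sketch (class and variable names are hypothetical, not the framework's real ones): a shared parent owns the identically named trigger variables and entry point, while each child pattern overrides only its own reaction.

```cpp
#include <string>

// Illustrative parent class: every pattern gets the same trigger variables
// and the same TryTrigger entry point; children override OnTriggered only.
class PatternBase {
public:
    virtual ~PatternBase() = default;

    bool bTriggerOnce = true;  // shared, identically named across all patterns
    int  TriggerCount = 0;

    bool TryTrigger() {
        if (bTriggerOnce && TriggerCount > 0) return false;  // already fired
        ++TriggerCount;
        OnTriggered();
        return true;
    }

protected:
    virtual void OnTriggered() {}  // reusable hook for child-specific logic
};

// A hypothetical child pattern: only the reaction differs from the parent.
class QuestControllerSketch : public PatternBase {
public:
    std::string LastAction;

protected:
    void OnTriggered() override { LastAction = "StartQuest"; }
};
```

The payoff of this structure is that trigger handling, once-only guards, and naming conventions live in exactly one place, so every pattern behaves consistently for the designers using it.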
Tech LD Blueprint Standards
____________________________________________________________________
I invented and implemented our team's Blueprint standards and peer review process. Before my efforts, the team had no such standards or review process in place at all.







Examples
____________________________________________________________________
Patterns created by me:
- Parent LDDP: parent class, holds all shared functionality of declarative patterns
- DataLayerController: allows streaming of Unreal 5 DataLayers
- QuestController: allows manipulation of the player's Quests and Quest Objectives
- ActorEventController: allows firing of custom events on other actors, and creation of a chain / sequence of events
- VOPlayer: plays NPC voiceovers + lipsync + subtitles
- PlayerCameraFadeController: fades the player cameras in/out
- CustomFunctionController: misc (code) function calls, such as opening character customization or executing console commands
- ActionPointController: provides ways to manipulate AI ActionPoints, used in AI non-combat situations
- DestroEventController: allows control over special Chaos Destruction BPs
- SimpleCharacterController: allows control over non-AI NPCs
Quest Controller
Allows manipulation of player's Quests and Quest Objectives (start, update, fail, etc).


DataLayerController
Allows various functionality for streaming in/out Unreal 5 DataLayers.


PlayerFadeController
Fades in/out the player camera(s).


Voiceline Player
Plays VOs, adds localized subtitles, and plays facial/lipsync animations. Audio can follow a chosen target if it's moving.


ActionPointController
Provides ways to manipulate AI ActionPoints (SmartObjects), used in AI non-combat situations. Its main purpose is to force-start combat or force AI out of their Ant Farm ActionPoints.

Example GIF cannot be shown yet due to project NDA

ActorEventController
Allows firing any custom events/functions on any other actor in the project, and creating chains / sequences of events


DestroEventController
Allows control over special Chaos Destruction BPs, such as setting their state or starting a destro transition and level sequence.

Example GIF cannot be shown yet due to project NDA

SimpleCharacterController
Allows control over non-AI NPCs, such as playing or changing set animations, or starting/stopping movement along a spline.

Example GIF cannot be shown yet due to project NDA

Other patterns reworked and maintained by me:
- AnimatedActorController: allows control over animated props such as doors
- ChangePlayerAbility: assigns various gameplay effects or abilities to the player(s)
- EncounterController: provides various ways of controlling a combat encounter
- ObeliskController: starts an encounter where the player must survive waves of enemies and stay in range of an obelisk until it has fully charged
The Quest System allows creation and execution of Quests for the players. I owned this system and worked alongside a programmer to create and maintain it across many iterations.


Quest Graph
____________________________________________________________________
The core of this system is a custom quest graph tool we created for developers in Unreal 5. Here, designers see a visual overview of all required steps / objectives, branching paths, and transition conditions, and can adjust them on the fly.

1. "Start Quest" node in the quest flow. Must be the start of every graph. Dragging an arrow onto another node connects it to the next step.
2. "Objective" node. Many types of objectives and subobjectives are possible. Objectives can be parallel, sequential, or both.
3. "End Quest" node. Must be at the end of the flow.
4. Selecting any node allows you to edit its details, such as the fact required for completion, objective text, or subobjectives inside it.
Quests are updated through declarative patterns - which I also owned (see section above).
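The node flow above (Start Quest, fact-completed Objectives, End Quest) can be sketched as a minimal standalone C++ state machine. This is an assumption-laden simplification (sequential objectives only, all names hypothetical), not the actual tool:

```cpp
#include <string>
#include <vector>

// One objective node: completed when its required fact is set.
struct Objective {
    std::string RequiredFact;
    bool bDone = false;
};

// Minimal sequential quest graph: Start -> Objectives in order -> End.
class QuestGraphSketch {
public:
    std::vector<Objective> Objectives;
    size_t Current = 0;
    bool bFinished = false;

    // Called whenever a fact is set; advances only if the fact matches
    // the current objective (sequential ordering, as in the simplest graphs).
    void OnFactSet(const std::string& Fact) {
        while (Current < Objectives.size() &&
               Objectives[Current].RequiredFact == Fact) {
            Objectives[Current].bDone = true;
            ++Current;
        }
        if (Current >= Objectives.size()) bFinished = true;  // End Quest node
    }
};
```

A real graph additionally supports branching paths, parallel objectives, and transition conditions per edge; the sketch only shows why driving progress off facts keeps the quest flow decoupled from level logic.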
Objective types
____________________________________________________________________
Throughout development many iterations were needed, such as the implementation of more objective types (boolean, integer, timer/location-based, sub-objective, etc.). I created the Technical Designs the programmer needed to implement them.
Saving & Restoring process
____________________________________________________________________
One of my biggest endeavours was helping to figure out how we would handle checkpoints, saving, and loading in Unreal 5. This was a tough process that went through many iterations across several months and involved help from various code departments, such as online, gameplay, and UI programmers.
In the end I came up with a fairly robust solution, in which the quest system is able to save progress in various ways as the quest graph updates, and declarative patterns restore the gameplay logic by checking the player's facts, active quest, and progress within their current quest. As I came up with and implemented part of this system, I was also the first point of contact when designers had questions about the setup of quests or declarative patterns on their levels.
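The restore idea described above can be hinted at with a small standalone C++ sketch (struct and field names are hypothetical): a saved snapshot of facts plus quest progress, which each pattern queries on load to decide whether its step has already been passed.

```cpp
#include <map>
#include <string>

// Hypothetical saved snapshot: active quest, how far the player got,
// and the fact values at save time.
struct SaveSnapshot {
    std::string ActiveQuest;
    int ObjectiveIndex = 0;
    std::map<std::string, int> Facts;
};

// On load, a pattern tied to a given quest step asks: did the player
// already complete my objective? If so, skip straight to the "done" state.
bool ShouldSkipPatternOnLoad(const SaveSnapshot& Save,
                             const std::string& Quest,
                             int PatternObjectiveIndex) {
    if (Save.ActiveQuest != Quest) return false;  // different quest: run normally
    return Save.ObjectiveIndex > PatternObjectiveIndex;
}
```

The key design point is that patterns reconstruct gameplay state from data (facts, active quest, progress) rather than from serialized actor state, which keeps save games small and resilient to level changes.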
Quest Example Level
____________________________________________________________________
I maintained a Quest System Example Level for our project with many example setups of various quest types and objective types. This sample level was used both to explain the quest system for people new to it, and to test out new and existing features. Coders especially appreciated this level for quickly testing out their new features.
First Point of Contact
____________________________________________________________________
As owner of the system I was first point of contact for any feature requests, bugs, questions or concerns.
The AI Ant Farm 2.0 system was invented to handle all pre-combat and non-combat AI behaviors. I designed this entire system from scratch, and was responsible for all of the Living World content on the projects I was assigned to.

ActionPoints
____________________________________________________________________
AI ActionPoints: these are our custom version of "SmartObjects": Blueprint actors that are prepared by technical designers (myself) and placed in levels to control the behaviors of AI Characters. ActionPoints have a limited number of slots that can be occupied by AI Characters. Occupying and moving into such a slot allows the AI to take various actions as pre-defined in the ActionPoint's graph, for example:
- Play (sets of) animations, with various transition conditions between them
- Equip/unequip weapon
- Play AKEvent sounds or Niagara VFX
- Spawn or unspawn and attach objects, such as a cup to hold
- Make fact changes that affect the rest of the level logic (for example to open a door)
- Perform synchronized actions with other AI
- React to combat start
- Check/set custom variables, and more

AI Simulation / Needs System
____________________________________________________________________
AI Needs Simulation: I designed a new system in which all AI have hidden needs variables that they try to fulfill, similar to what you might see a character do in The Sims. Using ActionPoints fills or drains certain needs values, causing constant adjustments. The AI autonomously moves around the scene trying to fulfill its needs, resulting in interesting, realistic behaviors. Needs were designed to be non-specific and easily expandable per project.
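The core selection loop of such a needs simulation can be sketched in standalone C++ (needs, values, and point names here are all hypothetical): the AI identifies its most urgent need and picks an ActionPoint that fills it.

```cpp
#include <map>
#include <string>
#include <vector>

// Hypothetical ActionPoint descriptor: which need it fills when used.
struct ActionPointSketch {
    std::string Name;
    std::string FillsNeed;
};

// Pick the ActionPoint that fills the AI's most urgent (lowest-valued) need.
// Returns "" if nothing fills it, in which case the AI keeps wandering.
std::string PickActionPoint(const std::map<std::string, float>& Needs,
                            const std::vector<ActionPointSketch>& Points) {
    std::string Urgent;
    float Lowest = 1e9f;
    for (const auto& [Need, Value] : Needs)
        if (Value < Lowest) { Lowest = Value; Urgent = Need; }
    for (const auto& P : Points)
        if (P.FillsNeed == Urgent) return P.Name;
    return "";
}
```

In the real system, need values also drain over time and get refilled by ActionPoint use, so this selection runs continuously and the AI never settles into a single spot.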

AI Needs Strategies / Query Filtering: In order to create AI archetypes that behave differently from each other in pre-combat, I designed a system that allows AI to have various strategies for fulfilling their needs. For example, snipers may run a preset that puts higher priority on safe areas, whilst rushers prefer to follow their impulses, running around and putting themselves in dangerous situations. Characters with a rifle could be more inclined to organise and delegate tasks to other AIs, whereas "badass" characters completely disregard their safety, standing out in the open in the arena. These categorizations help players easily recognize archetypes when approaching a combat situation.
AI Memory system: I designed a new memory system that allows AI to remember which ActionPoints were already used so they can try to avoid them.
Perception
____________________________________________________________________
AI Perception: I created a system that allows AI perception to be overridden to various presets while on Ant Farm behavior. This is because AI requires far bigger perception ranges in combat than in pre-combat situations; in pre-combat, the player should be able to sneak up and observe the AI behaviors.

Attractors: A special functionality that allows ActionPoints to "attract" other AI towards them at a specific time. For example, a musician starts playing, and a crowd is drawn to the point.
AI Avoidance: The system that controls whether, how, and at what ranges AI Characters avoid each other. I designed the override system for Ant Farm, as we didn't want the same settings as in combat, and set up specific ranges for every AI Character on my project.
Ant Farm Chunk and Manual action chains
____________________________________________________________________
Ant Farm Chunk: The Ant Farm chunk is a high-level actor in the scene that references / "contains" all its spawned AI, AISpawners, and ActionPoints, as well as some settings for the entire chunk. When in simulation behavior, AI spawned on a chunk can only queue ActionPoints within that same chunk. This allows the setup of many "bubbles" of Ant Farm across a large open level, each of which can be individually streamed in or out.
Ant Farm Chain: Although the focus of the new system is self-autonomous AI behaviors, I provided a way for Level Designers to script a sequence of ActionPoints if necessary by creating an Ant Farm chain, linking various ActionPoints together.
Ant Farm Chain / Patrol System: I designed and prototyped an intuitive in-editor toolset for setting up patrol paths for AI characters, for cases where this is preferred over the self-autonomous needs-based behaviors. The user can set up a sequence of ActionPoints to occupy, and choose what happens after the sequence is over (restart from first point, reverse, end chain).
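The three chain end behaviors (restart from first point, reverse, end chain) can be sketched as a simple standalone C++ iterator over a sequence of ActionPoint ids; names and structure here are illustrative only.

```cpp
#include <vector>

enum class ChainEnd { Restart, Reverse, End };

// Hypothetical patrol chain: walks a sequence of ActionPoint ids and
// applies the chosen end behavior when it runs off either end.
struct PatrolChain {
    std::vector<int> Points;          // ActionPoint ids, in order
    ChainEnd EndMode = ChainEnd::Restart;
    int Index = 0;
    int Dir = 1;
    bool bDone = false;

    // Returns the current point, then steps to the next one.
    int Next() {
        int P = Points[Index];
        int NextIdx = Index + Dir;
        if (NextIdx < 0 || NextIdx >= (int)Points.size()) {
            switch (EndMode) {
                case ChainEnd::Restart: NextIdx = 0; break;
                case ChainEnd::Reverse: Dir = -Dir; NextIdx = Index + Dir; break;
                case ChainEnd::End:     bDone = true; NextIdx = Index; break;
            }
        }
        Index = NextIdx;
        return P;
    }
};
```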


Pre-combat actions
____________________________________________________________________
Alerted state: I prototyped an alerted state, where enemies are aware of player presence, but not yet fully starting combat. They will for example make VO callouts and search for potential threats.
Equip/holster system: I designed and implemented the weapon equip/holster system. This allows AI to free up their hands before using an ActionPoint that requires it, and re-equip weapon later if needed, for example when moving back to a patrolling behavior, or when starting combat.
ForceOut: This is what we call an AI being forced out of an ActionPoint it is actively using. Usually this is caused by a combat trigger, such as the player making noise or being seen by the AI or its allies, and it usually results in combat behavior starting.
Conditional ForceOut: A feature which allows designers to put in extra conditions for allowing a ForceOut to happen. For example, the character ignores its sight and hearing sense but will react to damage, or a fact condition must be true.
Extended ForceOut: Allows AI to run to a "last effort" point before starting combat. For example, running to press a button, open a cage, or destroy something on the level.
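The conditional ForceOut idea (ignore certain senses, optionally gate on a fact condition) can be sketched in standalone C++; the enum values and rule fields are hypothetical stand-ins for the real designer-facing options.

```cpp
#include <functional>

enum class Stimulus { Sight, Hearing, Damage };

// Hypothetical per-ActionPoint ForceOut rules: a stimulus is checked against
// the designer's extra conditions before the AI is forced out of its slot.
struct ForceOutRules {
    bool bIgnoreSight = false;
    bool bIgnoreHearing = false;
    std::function<bool()> ExtraFactCondition;  // optional fact check, may be empty

    bool AllowsForceOut(Stimulus S) const {
        if (S == Stimulus::Sight && bIgnoreSight) return false;
        if (S == Stimulus::Hearing && bIgnoreHearing) return false;
        if (ExtraFactCondition && !ExtraFactCondition()) return false;
        return true;  // in this sketch, damage always passes the sense filters
    }
};
```

An Extended ForceOut would then run a "last effort" action (press a button, open a cage) between this check passing and combat behavior starting.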
Custom Editor Tools
____________________________________________________________________
I designed an Unreal 5 Editor Mode toolset for our developers to set up Ant Farm content, and worked with a tools programmer to make this toolset a reality.

I designed the ActionPoint graph, which makes setup of ActionPoints much easier than with our previous SmartObjects. The graph is split into "stages", where a technical designer sets up all the precise animations to play, conditions for the various animation state transitions, and modifier effects to apply to the AI currently using the graph.

I designed and created an Editor Utility Widget for Ant Farm-related content, which allows for instance easy spawning of relevant actors and finding ActionPoints with preview videos and instructions.
I designed how the Ant Farm should interact with our Encounter System, which controls AI combat encounters. I designed how spawning of Ant Farm waves is set up in combat scenarios, and exactly which variables are available for Level Designers.


Test map
____________________________________________________________________
Of my own accord, I designed and maintained a custom test map containing all Ant Farm features, all AI Characters, and all AI ActionPoints, for easy testing and bug fixing. This was greatly appreciated by everyone on the project.