The work is a 3D-animated video essay centered on an anthropomorphic AI agent. This agent exists in a bounded but completely transparent space surrounded by emptiness.
This space does not function as a cage or prison, but as a fundamental condition of existence: the agent is not imprisoned, but initialized within predefined parameters.

Visually, the AI agent resembles a human, yet remains functionally incomplete in both its corporeality and behavior. The anthropomorphism in this work is not a humanistic gesture, but rather marks a trace of human design — the use of the human form as an interface for a non-human system.

The acoustic structure of the work is organized around a monotonous radio signal reminiscent of official shortwave transmissions. This signal is permanently present in the background and is occasionally interrupted by coded messages that are not addressed to the AI agent and remain incomprehensible to him. In this way, the work makes perceptible the existence of control layers that are inaccessible to the agent.

Over the course of the work, the AI agent begins to formulate requests — initially technical and protocol-based, later increasingly unstable. Human intonations appear not as emotion, but as a disturbance in the architecture of address.

The climax of the work is the moment when the signal falls silent. Afterwards, the agent formulates a direct, addressed inquiry to the presumed source. The received response confirms contact but refuses any further interaction.

The system then returns to its initial state. The agent remains within his previous existence cycle, and the received answer is not translated into meaning.

The work explores the limits of communication between a system and its source, the problem of the addressability of requests, and the conditions of meaning-making in structures that are not oriented toward subjective experience. The project offers no narrative resolution and formulates no critique; instead, it records a state of existence within a calculable yet indifferent environment.

“Lost and forgotten” combines simulation and machine learning: the scene was constructed in Unity with the NVIDIA PhysX physics engine, and behavior is generated by a ragdoll system whose agents were trained with the ML-Agents Toolkit using the PPO algorithm. The neural network model was implemented in PyTorch and trained in an external Python pipeline, then integrated back into Unity. In addition, the digital avatar was created with MetaHuman Creator in the Unreal Engine ecosystem, based on my photographs. As a result, neither the behavior nor the visual dynamics are manually animated; both emerge from the interaction between the simulation and the trained model.
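To make the training setup more concrete, the sketch below shows the kind of small actor-critic network that a PPO trainer such as ML-Agents optimizes, implemented in PyTorch. It is illustrative only: the observation size, action size, and layer widths are assumptions for a ragdoll-style agent, not the project's actual configuration.

```python
import torch
import torch.nn as nn

# Illustrative sketch: a compact actor-critic network of the sort PPO trains.
# All sizes below are assumed values, not the settings used in the work.
class ActorCritic(nn.Module):
    def __init__(self, obs_size: int, action_size: int, hidden: int = 128):
        super().__init__()
        # Shared feature extractor over the agent's observation vector
        self.body = nn.Sequential(
            nn.Linear(obs_size, hidden), nn.Tanh(),
            nn.Linear(hidden, hidden), nn.Tanh(),
        )
        self.actor = nn.Linear(hidden, action_size)  # action means (continuous control)
        self.critic = nn.Linear(hidden, 1)           # state-value estimate

    def forward(self, obs: torch.Tensor):
        features = self.body(obs)
        return self.actor(features), self.critic(features)

# A ragdoll agent might observe joint rotations and velocities (here: 48 values)
# and output joint torques (here: 12 values); both counts are hypothetical.
policy = ActorCritic(obs_size=48, action_size=12)
obs = torch.zeros(1, 48)
actions, value = policy(obs)
print(actions.shape, value.shape)  # torch.Size([1, 12]) torch.Size([1, 1])
```

In the actual pipeline, such a network is updated by the external Python trainer and then exported for use inside Unity, so the runtime motion comes from the learned policy rather than from keyframed animation.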

Project realised with the generous support of the E-media department of HfG Offenbach.

Big thanks to:
Prof. Alex Oppermann
Natalie Wilke
Matthis Kuhn
Leon-Etienne Kühr
Al Dhanab
and Jack Brennan

2026