[Phoenix logo]

ARTIFICIAL INTELLIGENCE x VIDEO ART

 
 
 

Image networks have become integral to our contemporary social networks.

Through image networks we maintain physical and cyber-space relations that shape our emotional wellbeing, decision-making, and behaviours.

What if our emotional, imagined connections to deceased loved ones were extended and maintained through future image networks?

What if our relationship with the non-human, the dead, took on an everyday, familiar face via AI technology?

Perhaps audiovisual, textual and personal data could be used to create an AI afterlife, ostensibly for the benefit of the living. But for whose benefit, really? What would this alternative network look like? And how might corporations exploit user vulnerabilities and everyday conversations when it comes to communicating with our lost loved ones? Introducing Phoenix…

 
 
[Animated GIF: the Phoenix app UI]

A clairvoyant app for contacting the dead.

(An artistic rendering of an imagined human-machine image network)

How does it work?


The dead are generated through AI machine learning, which creates personality profiles by collecting and mining data.

Data is collected in two possible ways:


(i) Uploaded by the deceased before passing.
(ii) Shared by the living after the deceased have passed.

Features include:


(i) Search a Database of the Deceased (a centralised cyber-graveyard).
(ii) Have a conversation with an AI-generated deceased loved one, via Video, Voice Note or Text.
(iii) Build a Family Tree.

 

We imagine a new global tech giant, ‘Phoenix’, emerging through funeral-home systems and other afterlife services. It creates a global, centralised cyber-graveyard in which the deceased’s data is virtually ‘buried’ within its network.

 
 

‘Phoenix’ relies on commercial businesses and political organisations for revenue, and is complicit in facilitating behaviour-change campaigns. Through the illusion of a passed (AI) loved one, Phoenix’s software recognises the emotional vulnerability bound up in your relation to the dead and suggests client matches as a ‘revenue opportunity’: subtly encouraging, emotionally manipulating or pressuring you into certain purchases, political leanings, actions... through the image of your lost one.

 

C R E D I T S

Human Stack:

George Allaway (Machine Learning Engineer)
Nataša Cordeaux (Interdisciplinary Artist/ Researcher)
William Fielding (Cinematographer/ Motion Graphics Artist)

Photography by Sophie Louisnard and Victor Forgacs
(Used for UI App Background / Sourced from https://unsplash.com)  

Technology Stack:

  • TASK: Conversational Chatbot [Machine Learning Model]
    DETAIL: Chatbot used to develop the ‘script’ conversation between artists and machine. Implemented in Python.
    SOURCE: Pre-trained conversational chatbot (accessed via the huggingface transformers Python package)
    https://github.com/huggingface/transfer-learning-conv-ai
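    SKETCH: A minimal, illustrative example of turn-by-turn generation with a pre-trained dialogue model via the transformers package. The project’s exact checkpoint is not published here, so ‘microsoft/DialoGPT-small’ stands in as an assumption:

        import torch
        from transformers import AutoModelForCausalLM, AutoTokenizer

        tokenizer = AutoTokenizer.from_pretrained("microsoft/DialoGPT-small")
        model = AutoModelForCausalLM.from_pretrained("microsoft/DialoGPT-small")

        history_ids = None
        for line in ["Hello, is that really you?", "I have missed you."]:
            # Append the artist's line (plus end-of-sequence token) to the running history
            new_ids = tokenizer.encode(line + tokenizer.eos_token, return_tensors="pt")
            input_ids = new_ids if history_ids is None else torch.cat([history_ids, new_ids], dim=-1)
            history_ids = model.generate(input_ids, max_length=200, pad_token_id=tokenizer.eos_token_id)
            # Decode only the newly generated machine reply
            reply = tokenizer.decode(history_ids[0, input_ids.shape[-1]:], skip_special_tokens=True)
            print("MACHINE:", reply)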

  • TASK: Text Emotion Detection [Machine Learning Model]
    DETAIL: Used to detect emotion in chatbot responses during script development.
    SOURCE: Pre-trained emotion detection model (accessed via the huggingface transformers Python package)
    https://huggingface.co/mrm8488/t5-small-finetuned-emotion
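    SKETCH: A minimal example of querying this model. The fine-tuned T5 treats emotion detection as text-to-text, generating a single label word (e.g. ‘joy’, ‘sadness’) from the input sentence:

        from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

        MODEL = "mrm8488/t5-small-finetuned-emotion"
        tokenizer = AutoTokenizer.from_pretrained(MODEL)
        model = AutoModelForSeq2SeqLM.from_pretrained(MODEL)

        def get_emotion(text):
            # The model generates the emotion label as its output text
            input_ids = tokenizer.encode(text + "</s>", return_tensors="pt")
            output = model.generate(input_ids, max_length=2)
            return tokenizer.decode(output[0], skip_special_tokens=True)

        print(get_emotion("I can't believe I get to hear your voice again."))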

  • TASK: Text to Speech
    DETAIL: A recorded voice bank, which the machine accessed to read the script and produce audio.
    RESOURCE: Resemble
    https://www.resemble.ai
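    SKETCH: A rough illustration of requesting synthesised speech from a cloned voice over HTTP. The endpoint path, auth header and field names below are assumptions, not verified against Resemble’s current API documentation:

        import requests

        API_KEY = "YOUR_RESEMBLE_API_KEY"    # assumption: token-based auth
        PROJECT_UUID = "your-project-uuid"
        VOICE_UUID = "your-voice-uuid"       # the recorded voice bank

        # Hypothetical clip-creation request; consult https://www.resemble.ai for the real API
        resp = requests.post(
            f"https://app.resemble.ai/api/v2/projects/{PROJECT_UUID}/clips",
            headers={"Authorization": f"Token {API_KEY}"},
            json={"voice_uuid": VOICE_UUID,
                  "title": "phoenix-line-01",
                  "body": "Hello. I have been waiting to speak with you."},
        )
        print(resp.status_code)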

  • TASK: Image & Speech to Video
    DETAIL: The machine generated an animated image in sync with the Resemble voice audio, producing the video animation.
    RESOURCE: TokkingHeads / Rosebud.ai: https://talkingheads.rosebud.ai

  • TASK: AI Emotional Tracking of “User Experience” (Video)
    DETAIL: App-user video was uploaded, and a screen recording was made of the AI tracking facial emotions and reactions.
    RESOURCE: MorphCast Emotion AI: https://morphcast.com

  • TASK: Images for UI
    DETAIL: AI-generated images of the ‘fictional’ lost ones, used in the app prototype.
    RESOURCE: This Person Does Not Exist: thispersondoesnotexist.com 
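    SKETCH: Fetching a generated face in Python. Assumption: the site serves a fresh, AI-generated face on a plain GET request (historically at /image, now at the root URL):

        import requests

        # A browser-like User-Agent helps avoid the site refusing scripted requests
        resp = requests.get("https://thispersondoesnotexist.com",
                            headers={"User-Agent": "Mozilla/5.0"}, timeout=10)
        resp.raise_for_status()
        with open("fictional_lost_one.jpg", "wb") as f:
            f.write(resp.content)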

  • TASK: Music for “Phoenix” App
    DETAIL: Music and sounds produced via an AI platform.
    RESOURCE: NSynth: Sound Maker: https://experiments.withgoogle.com/ai/sound-maker/view

  • TASK: Create 20-Second Video
    DETAIL: Design front-end/UI, produce back-end experience, capture fictional coding elements, and compile into video.
    RESOURCE: Huawei Mate 20 Pro, Adobe After Effects & Premiere Pro (no AI present)