Since users hold certain privacy boundaries toward their AI devices, I decided, taking Alexa as a sample service, to first collect this information through a survey in order to identify common topics for the roleplay. To start, I sent out a Google Form to frame some speculations and concepts for the roleplay.
The full survey can be accessed here: https://forms.gle/RwT5MEaVDggsLKA56.
Voice assistant users range from children to adults; 52 participants took part, 34 of them aged 18-24, giving more focused insight into how this target market uses their assistants. The highlight of the survey was a description of the types of comments/questions across the four levels of personalisation. Thematic analysis was applied to the comments written by users in order to construct a scenario of the interaction.
I learned that participants were able to relate to the four levels of personalisation and offered their interpretations of interactions ranging from generalised to more personalised, the latter being similar to having a real-life friend. Using these survey results, the picture above depicts a speculation of privacy boundaries based on the answers classified above: a combination of Roessler's framework and the Islamic concept of privacy. With these four scenario classifications, further questions were formed to develop scenarios for the roleplay.
Four scenarios were generated, each focusing on a different personalisation measure. A structured interview style was applied, with open-ended questions following the end of each scene. Each scenario contains a mix of questions ranging from very generalised to very personalised; this mix was chosen to produce a smooth storyline and prevent any discomfort in answering the questions. Low-fidelity prototypes of the AI objects (see below) were deployed to assist in visualising each scenario.
Questions in each Scenario:
Prototypes assisting the scenario depiction:
I think the experiment fails to prove exact measures of boundaries, as different people have different preferences about the boundaries they want to impose on an AI device. Hence, no pattern could be derived correlating Roessler's framework with whether a scenario is generalised or personalised. However, the combination of Roessler's theory and Islamic privacy boundaries may still be used to support the hypothesis that willingness to share data depends on gender and on the closeness of the person asking.
Here, I learnt that different people make different privacy choices. Hence, a product that employs decisional qualification, meaning that users decide to share their data solely at their own discretion, could be a possible way forward for this project. This probe inspired me to create a human-centred product where privacy boundaries are decided by the user.
Capturing the various results of my probe, I have come to identify a possible product design that manages privacy through some form of decisional qualification. In other words, users decide to share their data solely at their own discretion, implemented with current technologies, to instigate a positive change for future society while maintaining human-machine interaction.
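To make the idea of decisional qualification concrete, the mechanism above could be sketched as a default-deny preference store: the assistant may only use a category of personal data once the user has explicitly opted in. This is a minimal illustrative sketch only; the class name, the category labels, and the methods are all my own assumptions, not part of any real Alexa API.

```python
# Hypothetical sketch of "decisional qualification": every data category
# starts as denied, and only an explicit user decision grants access.
DATA_CATEGORIES = ["name", "location", "health", "relationships", "purchase_history"]

class PrivacyPreferences:
    def __init__(self):
        # Default-deny: nothing is shared until the user decides otherwise.
        self.allowed = {category: False for category in DATA_CATEGORIES}

    def grant(self, category):
        """User opts in to sharing this category."""
        self.allowed[category] = True

    def revoke(self, category):
        """User withdraws consent at any time."""
        self.allowed[category] = False

    def may_use(self, category):
        # Unknown categories are treated as denied.
        return self.allowed.get(category, False)

# Usage: the user opts in to location only, so a personalised weather
# answer would be allowed while a health-related follow-up would not.
prefs = PrivacyPreferences()
prefs.grant("location")
print(prefs.may_use("location"))  # True
print(prefs.may_use("health"))    # False
```

The design choice here mirrors the survey finding that no single boundary suits everyone: rather than the device inferring a boundary, each grant or revocation is a deliberate act of the user.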
In short, the future I picture is one where humans can place boundaries on autonomous machines such as Amazon Alexa by manually taking control of which data stays personal. Using the concept of critical design, this project intends to express a personal stand on privacy issues and to ignite further discussion about the role data-collecting devices can play in letting us control our own privacy through voice assistants.