Shannen Dorothee Tioniwar

Counterfunctional Probe: Loneliness and Cyber Paranoia

Because loneliness and cyber paranoia correlate with each other, I will combine the two speculations and test them together. Given the time constraints, a low-tech probe seemed an efficient and appropriate way to elicit fresh creative data.


This probe uses two current technologies:

A. A smartphone, to probe loneliness.

B. An Amazon Echo, to probe cyber paranoia.


I will test these two technologies in separate counterfunctional probes (design experiments in which an artifact deliberately counters some of its own functionality). Disclaimer: the probes for the two devices, the smartphone and the Amazon Echo, are conducted separately with two different audience groups, based on device availability.


Process:

The physical functions of the smartphone and the Amazon Echo are covered separately to gather responses to the loss of functionality. Because the covering case fits only certain Android phones, my own Samsung smartphone serves as the sample object for this test. The smartphone experiment is conducted in an empty room, to reduce distractions and heighten the feeling of being alone in a space with a technological device, while the Amazon Echo experiment takes place in the participants' own homes, where they already keep their devices.


Evaluation:

Overall, I found that the appearance of the case affects our willingness to give up the functions of a technological device. In other words, the blackboxing technique used to make these devices calls our trust in them into question. A pertinent quotation from IDEO, "trust is earned, not machine learned" (Massa, 2018), speaks to this context. We could conclude that loneliness is, in fact, a result of distrust of machines. The idea of disrupting a relatively new and popular AI gadget like the Echo is also interesting, as it offers insights relevant to the growing market for AI technology. However, enforced transparency does not satisfy the notion of free will, of making a decision of one's own accord. Rather, it erects a permanent barrier that literally prevents any interaction between people and the Echo. The decision still rests with the case, which forces people to consent to the device's dysfunction.


With these findings on the interplay of boundaries, transparency, and trust, my project aims to "preserve human privacy by establishing boundaries between AI devices and their users, taking the standpoint of the lonely and the cyber paranoid through the themes of transparency and trust." The project is not meant to identify right or wrong, but rather to express a personal stance on privacy issues and ignite further discussion.


AI comes with trade-offs: more speed, less accuracy; more autonomy, less control; more data, less privacy (Agrawal, 2018, p. 25). Privacy should therefore come with boundaries. As for how people can set boundaries on their devices, the questions we should ask ourselves are: which data do we value more than others, and when should we draw the line and enforce our boundaries?
