Dissertation, 10,699 words (43 pages), 16th Dec 2019

Fettered, Obscured

Interrogating the Foundational Values of Silicon Valley-style User Experience


1.0 Introduction

This dissertation aims to engage with designers, social scientists, and companies involved with defining user experience of information on social platforms, in order to gain perspective on the systems of thought that shape user experience in live practice and as an ideology. Through this process, I hope to make theoretical connections that serve as a small contribution to research concerned with imagining environments within which standards for technology production can be developed, that do not constrain the legacy of digital innovation. In particular, I hope to study methods designers and researchers involved in technology production can use to maintain a sense of individual responsibility for the social routes and implications of their work.

In Art Worlds, Howard Becker reflected on the complex, interdependent layers of convention that mean cultural objects regularly take shape in standardised forms, and events in standardised formats.[1] He took the stance that it is the enmeshing of historic, technical and theoretical conventions that makes it unusual for a musical concert to last more than three hours, or for a painting’s dimensions to be unsuited to hanging on a wall. Bowker and Star later introduced ‘Infrastructure Inversion’[2], a research strategy developed for understanding how classification systems and standards are embedded in scientific practice, and further, how these standards are able to evolve with science itself. Like Becker, they found that as classification systems develop, their description of reality becomes true, a principle they called convergence.[3]

Technologies often evolve faster than standards can be defined, and their future applications cannot be reliably predicted. Developers of standards and classifications therefore inevitably conceive them in a state of partial blindness, such that constraints and exclusions that could not be imagined at the time of development can be built into systems. Jaron Lanier refers to this phenomenon, when it occurs in the digital sphere, as ‘digital lock-in’: “the process of lock-in is like a wave gradually washing over the rulebook of life, culling the ambiguities of flexible thoughts as more and more thought structures are solidified into effectively permanent reality”.[4]

Innumerable events across the last decade have underscored the fact that ubiquitous information technology in practice is not a neutral force, but one capable of embodying the innate biases of its users and, arguably more crucially, its producers. This latter capacity was recently exemplified through Christopher Wylie’s disclosures regarding Facebook and Cambridge Analytica.[5] With advancements in fields like machine learning accelerating rapidly, and new applications of these technologies poised to take deeply assistive roles in decision making in contexts ranging from judicial rulings[6] to screenwriting[7], it could be argued that much discussion about future innovation revolves around “whether we can, rather than whether we should”, as aptly summarised by futurologist Richard Watson[8].

My text takes the stance that in this early stage of our conception of the digital sphere, it is of high importance that technology producers preserve a sense of individual responsibility for their contributions to innovation. Focusing on the development of user experience methodologies within social platforms, my inquiry will regularly turn to Facebook, as the largest social network[9], and therefore a company with significant influence over the development of Silicon Valley’s prevailing standards for user experience (UX) design and research. 

1.1 Lucy Suchman’s Theory of Located Accountability

Research and design standards for UX may gradually become entrenched, or ‘digitally locked in’, as they develop. Proceeding from Bowker, Star and Lanier’s points of view, these standards must inevitably be conceived in states of partial blindness: their producers are unable to foresee the future constraints of their work before it ‘converges’ and becomes intrinsic to the reality of what user experience can be.

Lucy Suchman agrees that the perspectives of technology producers and innovators must be limited, advocating ‘Located Accountability’:

“The fact that our knowing is relative and limited by our locations does not in any sense relieve us of responsibility for it. On the contrary, it is precisely the fact that our vision of the world is a vision from somewhere – that is inextricably based in an embodied, and therefore partial perspective – which makes us personally responsible for it. The only possible route to objectivity on this view is through collective knowledge of the specific locations of our respective visions.”[10]

Objectivity is highly valued in user centered design processes. A broadly accepted line of thinking is that maintaining an objective quality throughout design processes, partly by relegating emotional human intervention, allows the “fundamental psychological underpinnings and biological traits that influence perception”[11] to take precedence. This pragmatic approach is undoubtedly crucial to UX processes, considering the super-scale audience a user interface might be designed to serve, and the fact that users’ accessibility requirements and other preferences may vary wildly. Is it possible, however, that there is more than one frame through which objectivity can be defined, and therefore more than one way emotional intervention can be relegated during design processes?


Lucy Suchman’s notion of Located Accountability uses feminist reconstructions of objectivity to present a tool for the development of alternative practices and transformative politics in technology production. In Located Accountabilities in Technology Production, Suchman engages with the locations of and boundaries between technology companies, designers employed in technological production, and end-users of technology. Her extrapolations include the idea of ‘Detached Intimacy’, a term describing the elaborate social environments that often exist within academic and technological production networks, while the communities themselves remain distanced from their work’s recipients, “cut off from others who might seriously challenge aspects of the community’s practice”.[12] Perhaps most pertinent to consider where objectivity is concerned, Suchman’s idea of ‘Design From Nowhere’ describes the removal of human intervention from the process of design to prevent bias, which can mean “designers are effectively encouraged to be ignorant of their own positions within the social relations that comprise technical systems”[13].

In support of my inquiry, I will employ Lucy Suchman’s notion of Located Accountability as a tool for identifying limitations of technology producers’ perspectives that are observable through study. Considering technology companies, their products, their employees and their end-users from the points at which each is situated, my inquiry will be founded between these locations, with focus on dynamics that are emerging, or may emerge in the near future, as a result of friction at these mid-points.

I will also attempt to distinguish the boundaries that characteristically separate the prevailing Silicon Valley view towards UX from other related practices, for example, historic approaches to the gathering of audience insight, and the founding principles of contemporary human-centered design and human computer interaction.

2.0 User Centeredness and Hegemony

In 1955, industrial designer Henry Dreyfuss introduced “Joe and Josephine”[14], an archetypal pair of imagined consumers mapped by anthropometric charts of their lives, to argue for thorough psychological and ecological evaluation of human factors in order to design products deeply appropriate to their audiences. With audience dial testing also having been conducted in television since the 1960s[15], decades of services and commodities could be considered, to some extent, the outcome of direct engagement with their end-consumers.

Dreyfuss’s fastidious theorisations of Joe and Josephine could be seen as proto-personas that introduced user centered design to a breadth of technological fields for decades to come. His legacy did not immediately translate into software design, however, as the industry fully emerged in the 1980s with its focus on solving the technical aspects of product design. With end-user software design bound by the frameworks of a much more arcane range of programming languages and databases than those at our disposal today[16], Alan Cooper, the programmer and consultant widely regarded as the ‘father of Visual Basic’, introduced the persona and the concept of goal-oriented design as practical interaction design tools, two advancements that allowed better integration between developers and designers at the point when software began to offer the smoother, more ergonomic experience we now expect from it. As Cooper recommended in 1999:

“High technology can go either way. It’s the people who administer it who dictate the effect…we have to revamp our development methodology so that the humans who ultimately use them are the primary focus…. we must turn the responsibility for design over to trained interaction designers.”[17]

By the millennium, suggested Gribbons, a steady transformation could be seen across digital markets “from the user bending to the system to the system bending to the user”.[18] New technology companies began to follow a trend towards user centeredness, perhaps initially most prominently exemplified by Apple, who released the first commercially successful computer with a graphical user interface[19]. As companies invested in user experience, they began to recognise its yield as a vast new market power. Proceeding to set the tone for mainstream digital experience, and seeing their own user-centric methods become standards in their own right, a new set of platform-based technology companies emerged as software became increasingly integral to its users’ daily lives. Today’s most successful consumer technology companies have come to be collectively referred to as ‘The Big Five’[20]: Apple, Amazon, Google, Facebook and Microsoft. New Media Theorist Lev Manovich describes software today as becoming “our interface to the world, to others, to our memory and our imagination – a universal language through which the world speaks, and a universal engine on which the world runs”.[21]

In 2018, much of our digital infrastructure revolves around user insight-driven systems. The broadening scope for commercial entities to observe user behaviour during even the most nuanced moments of their traversal through digital space lends real-time understanding of individual users a new poignancy. This focus is quite unlike that which Cooper imagined when introducing the persona 20 years ago, idealizing the process of “designing for just one person”. Historic methodologies like Dreyfuss’s proto-personas were characteristically diagnostic in nature, in that they were designed to be used at an initial stage of product development. Today’s scope for data to be obtained in real time, building elaborate images of individual users, creates a different type of persona altogether; as Bratton suggests, “the ratio between number of users and possible personas is then trending towards a 1:1 ratio; your profile is your persona”.[22]

In order to obtain such data, a rich foundation of technical functionality is required beneath every interaction made within a platform. The power afforded to the most successful technology companies through their greater computational capacity is potentially dangerous, as implied by Emily Rosamond: “by reconstructing provenance, those who practice big data analytics wield advantage over competitors with less powerful computational abilities; they also invent a new master’s discourse […] claiming to discern the “truth” about their subjects”[23]. The most successful technology companies are also virtually impossible to regulate on a country-by-country basis, due to regulators’ struggle to keep up with the pace of technological change. Matt Ward highlights: “Anti-Trust laws are dated. They define monopolies not by their power and control, but by negative impacts (specifically price gouging) experienced by customers.”[24]

Zygmunt Bauman uses the term Liquid Modernity to describe a condition where power is free to flow, and “barriers, fences, borders and checkpoints [are] nuisances to be overcome or circumvented”[25]. The significance of frontrunner technology companies’ capacity to sustain their success through their computational ability indicates a shift towards more fluid states of power, with Bauman and David Lyon subsequently describing ‘liquid surveillance’ and its inherent challenge to traditional regulation: “power now exists in global and extraterritorial space, but politics, which once linked individual and public interest, remains local, unable to act at the planetary level. Without political control, power becomes a source of great uncertainty, while politics seems irrelevant to many people’s life problems and fears”[26].

The intention of this section was to offer an aerial overview of the relationship between technology companies’ adoption of user centered methods, their successes, and their power. Moving forward, I will begin to zoom in, with the aim of engaging with the locations of and distances between technology companies, their employees, products and users.

 

3.0 The Appeal of the Fettered

 

As user centeredness and ethnographic practice have become more widely adopted in the digital sphere, increasing numbers of anthropologists, psychologists, ethnographers, and other social scientists have been required to contribute towards the development of its language and methodologies through social research. With resources abundant within the private sector, roles within technology companies and startups commonly offer substantially higher salaries than academic placements. Some indication of this can be found by considering the disparity between the current average salary of a UK-based User Experience Researcher at Facebook, reported as £81,934[27], and the current advertised rate for a Postdoctoral Research Associate within Oxford University’s Experimental Psychology Department, of between £31,604 and £38,833[28]. With the security private sector companies and startups can provide offering an enticing alternative to academic placements, social scientists have begun to migrate towards the non-academic, in some cases despite reservations, or the move “historically seeming implausible”[29]. As Thomas Wendt summarizes, “there are always times of revenue slumps where one must weigh the benefit of not taking a project with the cost of not making rent”[30].

The reasons behind this shift go far beyond the beckoning of career security. Operating within these territories gives fledgling social scientists the power to act as advocates for inclusion[31], working as intermediaries between business needs and end users. Just as Cooper’s personas contributed towards the Web becoming fundamentally less chaotic for its users, social scientists’ adoption of the massive-scale social platforms they work through as research environments has led to some inventive and broadly positive results. One example is Microsoft’s retrospective study of 64 million users of Bing’s search engine, which found that by analysing large quantities of search queries, it may be possible to identify users who are suffering from pancreatic cancer, often pre-diagnosis.[32]

Together, a private sector equipped with an abundance of resources and the potential afforded by social platforms as research environments provide the opportunity to conduct influential studies. Major platforms, from Facebook to Buzzfeed to the BBC, conduct thousands of tests within their networks every day, with Zuckerberg stating recently that 10,000 versions of Facebook may be running at any one time[33]. Even in a standard business-as-usual state, a major platform generates billions of measurable use insights through data analysis. Netflix, a famously data-driven service, processed 500 billion daily events, such as video viewings and UI activities, in 2016[34]. As elaborated on by Michael Schrage, insights around choice, preference, bias, affinity, creativity and decision are particularly retrievable from these networks. Schrage highlights this capacity to argue that industry is therefore better positioned than universities to design and deploy replicable, rigorous experiments[35].
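The mechanics of running thousands of overlapping experiments at once can be illustrated with a minimal sketch. Large platforms typically assign each user to a variant of each experiment deterministically, often by hashing the user ID together with an experiment name; the function below is a generic, hypothetical illustration of that technique, not Facebook’s or Netflix’s actual implementation.

```python
import hashlib

def assign_variant(user_id, experiment, variants=("control", "treatment")):
    """Deterministically bucket a user into an experiment variant.

    Hashing user_id together with the experiment name gives each
    experiment an independent, stable split: the same user always
    lands in the same variant of a given experiment, while their
    variants across different experiments are uncorrelated.
    """
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % len(variants)
    return variants[bucket]
```

Because assignment here is a pure function of the (user, experiment) pair, no per-user state needs to be stored, which is part of what makes running ten thousand concurrent variants of a platform tractable.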

It has been recognized, however, that research quality can be affected by the context in which it is undertaken. As recommended by pioneer of information technology Vannevar Bush in 1945, providing researchers with autonomy over the focus and style of their studies can produce better results:

“scientific progress on a broad front results from the free play of free intellects, working on subjects of their own choice, in the manner dictated by their curiosity”.

Research that fits this description has come to be known as ‘unfettered’, or simply ‘curiosity-driven’.

Three points of consideration may cast doubt over the possibility that a culture of ‘free play’ true to Bush’s recommendations can exist for private sector researchers: depth of focus, time scale, and area of focus. The volume of discrete use events platforms process daily requires focus on the marginal, to tight time scales, and a majority of research is likely to be conducted with the primary aim of fulfilling business needs over discipline-related needs; as deftly summarized by David Auerbach, “Facebook isn’t researching computer science, it’s researching itself”[36]. A decline of unfettered research for these reasons was recognised by Jon Gertner:

“In business, progress would not be won through a stupendous leap or advance, it would be won through a series of short sprints, all run with a narrow track.”[37]

Despite Bush’s visionary capacity to imagine the “exceeding speed and flexibility” of future memory-supplementing devices[38], it is not possible for his post-war perspective to remain wholly applicable to the current research landscape or the particulars of technological advancement today. Nonetheless, the conflicts of interest reported by some social scientists working in the contemporary industry point towards the value of curiosity-driven research practice. In a recent article, Anne McClard reflects upon her own experience as a young ethnographer working for Intel. While acknowledging that technology is poised to solve many of the Earth’s problems, and that the consistent feeling of being on the edge of a breakthrough was always exciting, she speaks of the constraints working in a large technology company can present: “stakeholders may not readily understand the value of taking an ethnographic approach to uncovering opportunities beyond the margins of their immediate questions, and they frequently do not consider the broader social implications of their innovation paths”, stating that she spent much of her early career believing her job was to deliver value to the company, given those constraints[39].

In ‘Radical Design and Radical Sustainability’, Thomas Wendt defines a radical approach to design as one that “specifically aims to uncover root causes and original sources, as opposed to surface level explanations”[40], which resonates with the possibilities offered by unfettered research. Returning to Lanier’s notion of ‘digital lock-in’, we must consider the long-term ramifications of prioritizing short-term goals for user experience research. At a point in history where a causal relation can be seen between technological change and issues relating to privacy, autonomy, and sociopolitical turbulence, maintaining a scientific tradition of unfettered research may create a better environment for social scientists to radically problem-solve, and to feel accountable for the legacy of their own work in doing so. As Dijkgraaf suggested: “to tap into the full potential of human intellect and imagination, we need to balance short-term expectations with long-term investment”[41].


4.0 Human Factors?

For one week in 2012, researchers from Cornell University, in conjunction with Facebook’s Core Data Science team, used the social network to conduct an emotional contagion experiment that was at the time considered the largest psychological experiment ever run[42], with a total sample size of 689,003 users. The massive-scale experiment manipulated the News Feeds of a randomly selected proportion of Facebook users, using a linguistic analysis tool to control the emotional parameters of posts from friends visible on users’ News Feeds. The results, published two years later, supported the experiment’s hypothesis that social networks can propagate positive and negative feelings[43].
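The general shape of such condition-based feed filtering can be sketched minimally. The word lists, scoring, and omission logic below are hypothetical stand-ins for the proprietary linguistic analysis tool and Facebook’s actual ranking code; they illustrate only the published design, in which posts matching a condition’s suppressed emotion were probabilistically omitted from a user’s feed.

```python
import random

# Hypothetical word lists standing in for the positive/negative
# emotion categories of a linguistic analysis tool such as LIWC.
POSITIVE = {"happy", "great", "love", "wonderful"}
NEGATIVE = {"sad", "awful", "hate", "terrible"}

def emotional_valence(post):
    """Crude valence score: positive minus negative word counts."""
    words = post.lower().split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

def filter_feed(posts, condition, omission_rate=0.5, seed=None):
    """Probabilistically omit posts whose valence matches the
    emotion suppressed under this experimental condition."""
    rng = random.Random(seed)
    kept = []
    for post in posts:
        v = emotional_valence(post)
        suppress = (condition == "reduce_positive" and v > 0) or \
                   (condition == "reduce_negative" and v < 0)
        if suppress and rng.random() < omission_rate:
            continue  # omit this post from the rendered feed
        kept.append(post)
    return kept
```

Even this toy version makes the ethical stakes concrete: the manipulation happens entirely inside the ranking layer, invisible to the user whose feed is being filtered.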

Perhaps partly due to the fact that the project’s findings were circulated just over a year after Edward Snowden’s first disclosures[44], and therefore to a newly informed and increasingly privacy-conscious online public, the experiment, led by Facebook Data Scientist Adam Kramer, was received by academics, Facebook users and the media alike with a shared sense of foreboding, and is regarded as highly controversial. The experiment raised concerns about emerging ethical standards for corporate research using digital data[45], epitomized by Cornell’s statement justifying why the experiment had not been moderated by an independent review body: during the experiment, its researchers “were not directly engaged in human research.”[46]

An interview with Kramer was published by Facebook in an internal marketing piece in 2012[47] (at this point the experiment was complete, but its findings were not yet publicly circulated). It illuminatingly contextualises his position at the time as an early-career social scientist. When asked why he joined Facebook, Kramer responds, “Facebook data constitutes the largest field study in the history of the world. Being able to ask – and answer – questions about the world in general is very, very exciting to me. At Facebook, my research is also immediately useful. […] In an academic position I would have to wait to have a paper accepted, wait for publication…” His answers could be seen as symptomatic of the specimen-like approach towards users employed by researchers as a result of Detached Intimacy.

Facebook have since implemented a long-term review of their research process[48]. It has been suggested that the public outcry prompted by the experiment led the image-conscious corporation, in the throes of numerous other contentious affairs, to cease publicly circulating research projects as ambitious as this one[49]. Perhaps it could be argued that if the experiment had been published today, it would conversely have been received as a constructive move by a public eager to decode the platform’s relationship with turbulence and transgressive behaviour.

While the recruitment of more social scientists into the ranks of platform development teams is an approach required to step into the future with some perspective and foresight, there is an argument that the language of UX isn’t yet established enough to implement without its meaning becoming obscured in live practice. Considering once again the way Wendt perceived the importance of the word ‘radical’, there may be an argument that the business models of the corporate structures investing in the development of the language of UX aren’t ‘radical’[50] enough for their own practices to be meaningfully influenced by the radical co-option of human-centered executives. Amirebrahmi writes compellingly from his own perspective as an ethnographer with experience working in industry:

(User Experience became a means by which) “the corporation could distill the outside world, purify it, and make it ultimately look nothing like the outside, but instead, just like its internal self.”[51]

If social scientists are increasingly involved with digital product development, but much of their work is ultimately filtered through the lens of a traditional business model, resulting only in insight towards increasing use, increasing uptake and increasing immersion, the result isn’t a product concerned with being truly useful to a user; it is a product concerned with feeling unavoidably necessary to a user. That is merely persuasive design.

“So, you didn’t ever stop and think ‘actually, this is people’s personal information and we’re taking it, and we’re using it in ways that they don’t understand’? You didn’t think ‘actually, I’m not sure about this’?”

“The company…we didn’t do a good job at due diligence, so no, we didn’t.”

“But what about you? So not just the company, your involvement in it?”

“No… We were solely focused on getting this data and doing this experiment.”

Earlier this year, whistleblower and former director of research at political consulting firm Cambridge Analytica Christopher Wylie went on record regarding his own experience as the designer of Cambridge Analytica’s psychological warfare weapons, which used the third-party quiz application ‘This Is Your Digital Life’ to harvest up to 87 million Facebook users’ data, exploiting the platform to expose them to personalized propaganda. The project is thought to have profoundly influenced political feeling in the midst of two seminal democratic votes of 2016, with Steve Bannon and Leave.EU among Cambridge Analytica’s clients.

In an interview offered as part of the Guardian’s initial public disclosures, Wylie provided a semi-comprehensive overview of the chain of events that led up to the experiment and its deployment, expressing a degree of regret over his involvement. Towards the end of the exchange, he is asked whether he had ethical reservations about working on the project at the time that led him to consider terminating his involvement (this excerpt is transcribed above). Pressed repeatedly by the interviewer on his own position, he is only able to reply in collective terms, speaking on behalf of the company, and the company’s due diligence policy. Despite playing an essential role in the operation, he cannot reflect upon his own involvement in singular terms. It almost feels like an affliction.

It is interesting to compare Wylie’s reluctance to demonstrate accountability in the singular with the specimen-like approach to research subjects Kramer exhibited following the Mood Contagion experiment. This may prompt the question: is it something inherent to super-scale digital research, or something about the specific culture of these working environments, that causes moral detachment?

Bauman uses the term ‘adiaphorization’[52] to describe a detaching of systems and processes from morality. He might argue for the former: that adiaphorization is an inevitable result of technology producers working with personal data, or “information that proxies for the person”[53]. He and Lyon take the stance that the practice of not directly engaging with human subjects hinders technology producers’ relational capacity, because the very act of using information as a proxy in the first place “reduces the humanity of the categorized”[54]. This may be supported by the fact that Kramer’s and Wylie’s working environments at Facebook and Cambridge Analytica would have been slightly different, despite the pair being united by their detached approach to research.

On the other hand, Thomas Wendt reminds us that it is often neoliberalism that plays a role in divorcing individual action from systemic effects[55], thus diminishing the individual’s capacity to evaluate risk. Suchman attributes this detachment to corporate culture itself[56], describing the layers of mediation between technology producers and the consequences of their work, the culture serving as an extension of the patriarchal family, with its employees in the roles of children[57].

If we were to employ a family analogy, then it would make sense to see the attitudes of figures like Wylie and Kramer mirrored further up the chain, towards management. In other words, if Wylie validates his own involvement in the Cambridge Analytica experiments by placing it in the context of the company goal (“we were solely focused on getting this data”), then his perspective encapsulates the management’s position, offering a new way to engage with it.

5.0 Benevolent Growth

Shortly after the first of 2018’s exposures concerning Cambridge Analytica and Facebook, an internal memo titled ‘The Ugly’, authored by Facebook Vice President Andrew Bosworth, was leaked[58]. Bosworth mentions a number of “questionable” growth techniques the platform uses, suggesting that any harm caused along the way is justified by the company goal of connecting people. Facebook’s absolute belief in the value of connecting people, which Fred Turner imagines as a type of “Connectionist” politics, is typified in the memo. Such belief in connection at all costs suggests Facebook sees itself as a benevolent operation, an objective force for good, unlike traditional corporations whose growth agendas may not align with their customers’ interests.

Wylie and Kramer’s ‘adiaphorization’ could be attributed to this aspect of culture within technology companies. One ramification of viewing corporate growth as benevolent may be a failure to compute or assess risk, and a tendency to see obstacles preventing growth as barriers that need to be broken. In Liquid Modernity, Bauman reflects on changes in the discourse around ‘risk’, which has tended to be displaced and replaced by discourse of ‘collateral damage’ or ‘collateral casualties’[59].

In February 2017, following Donald Trump’s election victory and the subsequent high-profile discussion of Facebook’s tendency to proliferate ‘fake news’[60], the company was forced to change its mission statement through an open letter published by Zuckerberg using the platform[61]. From “to give people the power to share and make the world more open and connected”[62] to “to develop the social infrastructure to give people the power to build a global community that works for all of us”[63], we see a mission statement transformed: the wording becomes far more tentative, arguably clunkier, and represents a clear attempt from Facebook to push the sense of responsibility that comes with building a functioning global community towards its users and away from its management, placing the company in the role of a mere arbitrator, which is possibly more realistic.

One line of Zuckerberg’s open letter, perhaps written in defence of the suitability of Facebook’s original mission statement at its time of writing, reveals an enlightening sense of the founder’s own vision: “Facebook stands for bringing us closer together and building a global community. When we began, this idea was not controversial.” For Zuckerberg to so confidently declare this stance uncontroversial in 2004 may allude to either a reluctance to acknowledge, or a lack of awareness of, opposition to Silicon Valley-style utopian Connectionist[64] thinking, which originated in the communes of 1960s America.

As Moira Weigel points out, Silicon Valley has had a long-term love affair with the word ‘tool’[65], and Facebook is no different, with Zuckerberg referring to the platform as a ‘tool’ eleven times during his testimony to Congress earlier this year[66]. The trend began in the 1960s, within the community surrounding Stewart Brand’s Whole Earth Catalog. Described by Steve Jobs as “sort of like Google in paperback form, 35 years earlier”[67], Brand’s intent for the Whole Earth Catalog was to connect the 8–10 million Americans living on communes at the time[68], who through it could recommend ‘tools to take back to the land’.

The values of the San Francisco New Communalists[69], as described by Fred Turner, were consciousness, mandatory love, technology, shared mindset, and the deeming of politics to be ‘bankrupt’[70]. But the group failed to sustain its mode of living, with most communes having disbanded by the 1980s and 1990s[71], prompting a shift towards computing and the creation of The WELL (Whole Earth ‘Lectronic Link) in the hope of reviving the mission elsewhere[72]. Welchman has attributed the failure of the New Communalists to their earnestness and tendency to moralise on conviction:

(To moralise in this way assumes that:) “conviction is based on a claim to truth, the sense of having a shared insight into what humans are and how they should live, with a concomitant apolitical or anti-political sense that we (who share this conviction) are right and those who do not are wrong”[73]

It makes sense for Facebook to rationalise itself as a benevolent entity that needs to grow, when this attitude is seen as part of a rich tradition in California. In Media Giants, Birkinbine discusses the language of engaging, connecting and sharing around which the platform revolves, arguing that this is undialectical and one-dimensional: “the discourse […] only views social media positively and is inherently technological-deterministic. It assumes that social media technologies have positive effects, and it disregards the power structures and asymmetries into which it is embedded”[74].

New Communalists, as Turner suggests, saw themselves as apolitical, although paradoxically, communes were “extremely conservative places by and large, immensely straight, immensely white”[75]. They imagined a world without politics, within which hierarchy was completely dissolved[76], the very ‘Connectionist’ thinking we see within both iterations of Zuckerberg’s mission statement.

There is an argument that such a dissolving of hierarchy, and therefore absence of politics, cannot actually exist in practice, especially in a designed environment. Jo Freeman argues in The Tyranny of Structurelessness that there is no such thing as a structureless group,[77] only formally and informally structured groups. Removing formal structure from a group only allows hegemony to be more easily established and less easily contested. If Silicon Valley-style utopianism was tried and tested in live practice through the communes of the 1960s, then it is unlikely to be sustainable on a larger scale. As Nagle suggests in Kill All Normies:

“The principle-free idea of counterculture did not go away, it has just become the style of the new right”.[78]

  1.       Loopholes

Although the majority of the online public are practised users of the Internet, in the sense that they use it regularly, understanding of elements like the Facebook News Feed and Google Search is limited. This is compellingly demonstrated in HCI research promoting the benefits of further investigation into the user’s view of, or ‘folk theories’ about, how underlying systems work. Eslami investigated folk theories of how Facebook News Feed’s algorithms work[79], finding a common one to be “The Personal Engagement Theory”, whereby users felt that by changing the way they behaved on Facebook, they were able to manipulate the feed into making friends’ posts appear more regularly: “you could just prod the algorithm along”.
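The “Personal Engagement Theory” can be illustrated with a toy model. The weighting scheme below is entirely hypothetical (the actual News Feed algorithm is proprietary and undisclosed); it simply demonstrates the behaviour users believed in, where interacting more with a friend boosts that friend’s posts in the ranking.

```python
# Hypothetical sketch of the "Personal Engagement Theory" folk theory.
# This is NOT Facebook's actual News Feed algorithm; the weighting
# formula is an illustrative assumption.

from collections import defaultdict

class ToyFeed:
    def __init__(self):
        # Count of the user's past interactions (likes, comments,
        # clicks) with each friend.
        self.engagement = defaultdict(int)

    def record_interaction(self, friend):
        """'Prodding the algorithm along': each interaction
        raises that friend's weight."""
        self.engagement[friend] += 1

    def rank(self, posts):
        """posts: list of (friend, recency_score) tuples.
        Posts from heavily engaged-with friends rank higher."""
        return sorted(
            posts,
            key=lambda p: p[1] * (1 + self.engagement[p[0]]),
            reverse=True,
        )

feed = ToyFeed()
posts = [("alice", 1.0), ("bob", 1.0)]
# After the user repeatedly engages with bob, bob's posts
# surface ahead of alice's, as the folk theory predicts.
for _ in range(3):
    feed.record_interaction("bob")
print(feed.rank(posts))  # bob's post now outranks alice's
```

Under the folk theory, the feedback loop is self-reinforcing: boosted posts attract further interaction, which boosts them further.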

The popularity of third-party ‘Who Unfollowed Me’ applications, which offer to retrieve ‘hidden’ information from social platforms like Facebook and Instagram about recent ‘deletes’ or ‘unfollows’ for the self-conscious social media user, demonstrates once again that users of social platforms will actively seek routes towards their goals when underlying systems present limitations. Despite many ‘Who Unfollowed Me’ applications being exposed as phishing schemes, which purport to reveal information in return for access to personal data but cannot actually provide that information and instead simply harvest the data[80], demand for such applications persists[81].

Through developing folk theories about social platforms, or using ‘Who Unfollowed Me’ applications in an effort to tap into a social platform’s backend, users are recognising the limitations of systems and attempting to circumvent them, though without any sanctioned means of doing so. Through a third example, we can see a group of users once again recognise the limitations of a system, but then develop a method to actively exploit it to their own benefit.

YouTube’s ‘reply girls’ phenomenon began in 2012, when a change to YouTube’s relevance algorithm gave significant weighting to video responses, or ‘replies’, to existing videos. Users known as ‘reply girls’ discovered that by responding to popular videos, their own content could reach similarly sized audiences and therefore generate significant revenue. Perhaps taking advantage of the fact that men spend more time on the website, 44% more in 2015[82], reply girls would often select a ‘suggestive’ thumbnail for the response, to encourage other users to click their links.[83] YouTube’s algorithm treated a “dislike” on a video as legitimate engagement, so the videos would only be ranked more highly when viewers attempted to protest against them, meaning the women would often make “pointless” videos in an attempt to antagonize further[84].
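The loophole can be sketched with two hypothetical scoring functions: one that counts any engagement, including dislikes (the behaviour described above), and one that weights videos watched to completion (the kind of amendment YouTube later made). The formulas and numbers are illustrative assumptions, not YouTube’s actual, undisclosed algorithm.

```python
# Illustrative sketch of the 'reply girls' loophole. Both scoring
# formulas are hypothetical, chosen only to show why counting
# dislikes as engagement rewards rage-bait while completion
# weighting does not.

def engagement_score(views, likes, dislikes):
    # Dislikes count as engagement, so even protest votes
    # boost a video's ranking.
    return views + 5 * (likes + dislikes)

def completion_score(views, completed_views):
    # The later fix: weight videos that viewers watch to the end.
    return views * (completed_views / views) if views else 0.0

# A 'reply girl' video: many clicks from a suggestive thumbnail,
# heavy dislikes, almost nobody watches to the end.
bait = dict(views=10_000, likes=50, dislikes=2_000, completed_views=300)
# A genuine response video: fewer clicks, mostly watched through.
genuine = dict(views=4_000, likes=400, dislikes=20, completed_views=3_000)

# Under engagement scoring the bait video wins; under
# completion scoring the genuine video wins.
assert engagement_score(bait["views"], bait["likes"], bait["dislikes"]) > \
       engagement_score(genuine["views"], genuine["likes"], genuine["dislikes"])
assert completion_score(bait["views"], bait["completed_views"]) < \
       completion_score(genuine["views"], genuine["completed_views"])
```

The design lesson is that any metric counted as ‘relevance’ becomes a target: once dislikes fed the score, antagonising viewers became a rational strategy.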

It is interesting to compare the responses (or non-responses) of the technology companies concerned to ‘Who Unfollowed Me’ applications and the ‘reply girls’ phenomenon. In the case of “Who Unfollowed Me” applications, despite clear demand for access to information about ‘deleters’, this information is still not accessible on Facebook or Instagram. A study has shown that social media users who feel insecure about their relationships are more active on the networks[85]. For these platforms, user activity creates user data, which is sold and generates economic value. Therefore, it is possible that platform users would share less if Facebook and Instagram presented them with transparent information about social performance, easing insecurity and having a detrimental effect on the companies. Conversely, within a matter of weeks of the emergence of ‘reply girls’, YouTube had amended its algorithm, giving priority to videos watched from beginning to end over the number of partial views, successfully ending the saga[86]. YouTube may have responded to the ‘reply girls’ phenomenon quickly because of outcry from other viewers, but the group had also recognised a loophole in the algorithm that it benefited YouTube as a company to close. This comparison may demonstrate technology companies’ reluctance to listen to users in instances where user demand cannot be framed within a business agenda.

  1.       Opaque Experience

In physical space, say a room, we would reasonably expect any adult visitor to be able to develop a basic mental model of how the room is decorated and what it contains after entering. They may also consider how the room smells, how its surfaces feel to touch, or which part of the room they stand in, in relation to the other parts. If they spend a lot of time in the room, they might begin to theorise about reconfiguring certain properties of it, perhaps changing the temperature, removing a piece of furniture, or redecorating.

An information environment may be considered transparent, or ‘seamful’, if a non-specialist visitor were able to understand and implement changes to this level. HCI researcher Motahare Eslami usefully analogises a ‘seamful’ interface as one that will “convey the patchwork of wireless signals and access points while highlighting signal strength and the boundaries of different networks”[87]. Wikipedia, then, manifests many characteristics of a seamful information experience. Through its notably bare-bones aesthetic, its functionality is plainly visible, promoting user education about the systems underpinning its interface and encouraging users to travel deeply within those systems, making revisions as they go.

The question of whether transparent (‘seamful’) or opaque (seamless) experiences are more broadly appropriate in the context of digital information can be traced back to early discussion of the very nature of electronic information within the field of cybernetics, among the communication theorists attending the post-war Macy Conferences. While one participant, neuroscientist Donald MacKay, contended that information needed to maintain a demonstrable association with meaning in order to remain information, the prevailing idea was Claude Shannon’s: that information should be considered an entity cut loose from its human and meaningful origins, primarily because signals were measurable and quantifiable while meaning was not[88].

Mark Weiser, who coined the phrase ubiquitous computing, saw seamlessness as one of its fundamental principles. A particularly influential quote of Weiser’s, often used in the context of seamless or screenless interfaces, states that “a good tool is an invisible tool. By invisible, I mean that the tool does not intrude on your consciousness; you focus on the task, not the tool.”[89] In this context, invisibility can mean either not being seen because something is hidden, or not being seen because it is accepted, or commonplace. In a later article, Weiser described the way he imagined computing would vanish into the background as a fundamental consequence of human psychology: “when people learn something sufficiently well, they cease to be aware of it”[90], relating this to philosopher Michael Polanyi’s notion of the “tacit dimension”[91]. Paul Dourish elaborates further on this process with a compelling analogy: “if somebody tells you ‘I saw an accident on the way to the bank office’, it is the accident that you enlighten. You do not reflect upon the fact that the person was on his way to the bank; that is something ordinary and will therefore not be questioned.”[92]

This view towards a less visibly routed user experience of information in the technological realm is now widely accepted as best fitting for mainstream purposes, partially for its obvious benefit of reducing cognitive load. With a public increasingly comfortable with screenless interfaces like the Amazon Echo and Google Home (three quarters of Americans are expected to have at least one such device by 2020[93]), technology producers looking to streamline their user journeys further by eliminating ‘friction’ see a new generation of “zero UI” products[94] as a desirable next step. Hilary Stephenson endorses this direction for innovation: “Through these simpler, more intuitive commands, technology will no longer act as a barrier between humans and the real world, but rather as something in the background people can interact with while carrying out day-to-day activities such as working, driving or socializing.”[95] Google’s Eric Schmidt takes this one step further, claiming users don’t just want Google to aggregate relevant results, but for Google to actually tell them what to do:

“I actually think most people don’t want Google to answer their questions, they want Google to tell them what they should be doing next.”[96]

Reduction of cognitive load is not the only reason why social platforms may find opaque interfaces preferable. Influential technology products moving towards increasingly opaque, black-box interfaces also produces an intentional lack of understanding in users of the underlying idiosyncrasies of platforms and, as argued by Oscar Wenneling, of the “limits and constraints that a certain technology has”.[97]

As we have established, today’s emerging standards for UX are created in states of partial blindness, with inbuilt limitations and constraints rooted in their creators’ situated, and therefore partial, perspectives. Perhaps in presenting today’s interfaces as opaque, and therefore narrowing the range of ways a user can affect a system, opportunities are being missed for technology producers to eliminate the effects of Detached Intimacy, narrowing the distance between their work and its social implications. Wendt believes that experience design processes could be more inclusive: “design loves to talk about collaboration, but it often lacks a sense of communalism. […] Communalist design not only provides a voice for those usually left out, it also works as a mediation point for the ethical responsibility of design”.[98]

In 1994, Chalmers and Dourish introduced the concept of social navigation[99]. Using the metaphor of a woodland path, they imagined an environment where paths from one place to another would be created, or cease to exist, depending on regular use. In the context of contemporary social platforms, social navigation might allow users to carve out their own uses of interfaces, which could gradually be adopted formally. This could be considered an intermediary point between the black-box and the seamful interface, and a route towards reframing objectivity in technology production. Suchman wrote on technology producers and the value they see in their professional skill:

(An assumption underwriting the boundary between those employed in technological production and the sites where it is received is) “the premise that technical expertise is not only a necessary, but is the sufficient form of knowledge for the production of new technologies.” [100] Lucy Suchman
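The woodland-path metaphor of social navigation introduced above can be sketched as a simple simulation: paths are reinforced each time they are traversed and decay when unused, so well-trodden informal routes eventually qualify for formal adoption. All names, parameters and thresholds here are hypothetical, intended only to make the dynamic concrete.

```python
# A minimal, hypothetical sketch of Chalmers and Dourish's
# 'social navigation': routes strengthen with use, fade
# without it, and are formally adopted once well-trodden.

class PathNetwork:
    DECAY = 0.9        # per-step fading of every path
    REINFORCE = 1.0    # strength added by each traversal
    FORMAL_AT = 5.0    # strength at which a path is formally adopted

    def __init__(self):
        self.strength = {}  # (origin, destination) -> path strength

    def step(self, traversals):
        """One time step: decay all paths, then reinforce the used ones."""
        for edge in self.strength:
            self.strength[edge] *= self.DECAY
        for edge in traversals:
            self.strength[edge] = self.strength.get(edge, 0.0) + self.REINFORCE

    def formal_paths(self):
        """Paths worn deep enough to be adopted as official routes."""
        return [e for e, s in self.strength.items() if s >= self.FORMAL_AT]

net = PathNetwork()
# Users repeatedly take the same informal shortcut.
for _ in range(20):
    net.step([("home", "clearing")])
print(net.formal_paths())  # the well-trodden path is eventually adopted
```

Because strength decays geometrically, a path that stops being used eventually disappears again, which is the property that distinguishes this from simply logging feature requests: the interface tracks live practice.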

“Although it is true that maps do not fully capture terrains, they are powerful technologies” (Becker 1986)

This means there is still hope for digital information, decontextualised, to remain a powerful tool and something we can usefully experience. (It does not mean decontextualised information should replace our previous understanding of information, just as maps could not replace terrains.)

 

“Users ‘construct’ technology; they do this both symbolically, in their ‘reading’ of artefacts, and literally, in the articulation work that is essential before a complete configuration of artefacts…can serve as an adequate day-by-day supporting structure for a live practice.”

For everyone to have the opportunity to be involved in a given group and to participate in its activities, the structure must be explicit, not implicit. Moving on from this discussion of opaque interfaces and systems, a related question concerns the sense of hierarchy and heterarchy within platforms, and the extent to which this structure is understandable to a user. Technology leaders talk a great deal about groups, but are they allowing groups to form naturally?

Friedrich Kittler saw technological development as an autonomous, self-catalyzing process, with new innovations following each other in a “rhythm of escalating strategic answers.”[101]

There is also a drip-down effect on UX from psychology, with terms like ‘cognitive recognition’ being shared and obscured in the process; whether this is acceptable remains an open question.

  1. Conclusion

Returning to infrastructure inversion: events like the rise of the alt-right are not disruptive events; they are outcomes, reactions. ‘Disrupt’ alludes to a momentary interruption, not a warning sign. Nothing is steady enough to be disrupted, especially at this point.

In concluding, it may be a recognition of the newness of everything that helps us make sustainable decisions.

Bowker and Star describe this blindness as constructing a world in which exclusion can occur, without seeing what is excluded[3]:

“This blindness occurs by changing the world such that the system’s description of reality becomes true. Thus, for example, consider the case where all diseases are classified purely physiologically. Systems of medical observation and treatment are set up purely and simply from a chemical imbalance in the brain. It will be impossible to think or act otherwise. We have called this the principle of convergence (Star, Bowker and Neumann, in press).” (Sorting Things Out)

“Reality is ‘that which resists’ according to Latour’s (1987) Pragmatist-inspired definition. The resistances that designers and users encounter will change the ubiquitous networks of classifications and standards. Although convergence may appear at times to create an inescapable cycle of feedback and verification, the very multiplicity of people, things and processes involved mean that they are never locked in for all time”

Autonomy can be compromised even if freedom is not: a life lived passively, without active choices being made, is not an autonomous one. And it fails to be autonomous no matter how elaborate one’s freedom of choice is, or how developed one’s mental capacities for rational deliberation.

Wendt takes the stance that design is inherently political, and that designers must be willing to be political:

“We need to do good, and we also need to help mitigate unintended negative consequences of the technologies we help to develop. We must not shy away from expressing points of view on any of the tough topics of our time—from poverty, gender, and environment, to privacy or corporate hegemony.” To design sustainable tools, we need to be political and critical, not utopian.

 

“The term ‘users’ contains an assumption about how to understand people, as if people were simply living organisms that use things, living conduits of requirements for products.

The term ‘usage model’ is asked to account for the objective specification of activities that are important to people, often manifested in categories as vast as ‘entertainment’ or ‘productivity’, as if these terms signified in nuanced and singular ways.”[102] Maria Bezaitis

A further indicator of the limitations of progress in these territories is ambiguous business language infiltrating the practice of social scientists employed within CDSPs, the exhaust of which is a specimen-like approach to users within these companies; phrases like “pain points” and “points of friction” are particularly worth exploring, and Maria Bezaitis’ interrogation of the topic, quoted above, is a useful starting point. Ellen Lupton, in Beautiful Users, talks about “points of friction between people and devices”. She is discussing goal-oriented design in the context of twentieth-century products, but what if contemporary points of friction are more complicated than the use of the phrase allows?

 


[1] Howard Saul Becker, Art Worlds (Berkeley, Calif: University of California Press, 1982).

[2] Bowker, Geoffrey C, and Susan Leigh Star, Sorting Things Out (Cambridge, Mass.: MIT Press, 2008)

[3] Bowker and Star, Sorting Things Out, p.49

[4] Jaron Lanier, You Are Not A Gadget (New York: Alfred A. Knopf, 2010).

[5]

[6]

[7]

[8]

[9]

[11] https://uxmag.com/articles/designing-objectively

[12] Suchman, Located Accountabilities In Technology Production, p.3

[13] Suchman, Located Accountabilities In Technology Production, p.5

[14] Henry Dreyfuss. Designing For People (Allworth Press, 1955) p.27-43

[15] Craig Tomashoff, “Networks Rely On Audience Research To Choose Shows”, nytimes.com, 2012 <http://nytimes.com/2012/05/13/arts/television/networks-rely-on-audience-research-to-choose-shows.html> [Accessed 14 March 2018]

[16] Alan Cooper, The Inmates Are Running The Asylum (Indianapolis, Indiana: Sams, 1999).

[17] Cooper, The Inmates Are Running The Asylum, p.155

[18] William Gribbons, “The Four Waves Of User-Centered Design | UX Magazine“, 2013

<https://uxmag.com/articles/the-four-waves-of-user-centered-design> [Accessed 15 March 2018].

[19] https://www.wired.com/2010/01/0119apple-unveils-lisa/

[22] Benjamin H Bratton, The Stack: On Software And Sovereignty (Cambridge, Mass: MIT Press, 2016).

[23]

[29] the real problem: rhetorics of knowing in corporate academic research

[82] https://digiday.com/media/demographics-youtube-5-charts/

[83] https://www.huffingtonpost.co.uk/entry/alejandra-gaitain-and-you_n_1328195

[84] http://gawker.com/5889759/weird-internets-how-thereplygirls-breasts-earned-her-youtube-death-threats

[85] https://www.sciencedirect.com/science/article/pii/S0191886914007247

[86] https://books.google.co.uk/books?id=8P1cDwAAQBAJ&pg=PT168&lpg=PT168&dq=reply+girls+youtube&source=bl&ots=GxKm-kDq_W&sig=MhBAW9w2KnnbJtmx0_OLcdUy5hI&hl=en&sa=X&ved=0ahUKEwi2tpqfwuXbAhWIXMAKHajDBJk4ChDoAQhQMAc#v=onepage&q=reply%20girls%20youtube&f=false

[87]

[101] Kittler (1999), p.121.
