Determining Whether Commentary on Online News Sites Matters Through Exemplification and Agenda Setting
A defining characteristic of news in the 21st century has been the move toward online consumption of news. A 2010 Pew Research study found that the Internet is the third most popular news platform, behind local television news and national television news (Purcell, Rainie, Mitchell, Rosenstiel, & Olmstead, 2010). This equates to 71% of American adults saying they consume news online at least occasionally, which is 94% of all Internet users. As digital natives grow older, this number will likely increase.
Online news consumption also contributes to news in the 21st century becoming more social and participatory. On the social side, more than 8 in 10 online news consumers participate in social activities such as sharing, swapping, emailing, or posting stories, as well as participating in discussion threads (Purcell et al., 2010). Getting news is an important social act: 72% of Americans surveyed say they follow news because they enjoy talking about it with others, and 75% report receiving news forwarded to them. Of the respondents, 69% keep up with news out of a sense of social or civic obligation.
Other forms of participation allow online news readers to become not only consumers but also distributors by sharing stories via email or posting them on social media platforms. Many newspapers now also allow readers to post comments at the end of online news and opinion stories. Most work about these commentary sections falls along the lines of content analysis. Perhaps the best-known of these explorations in the United States is the “Engaging News Project” led by Natalie Stroud at the University of Texas at Austin.
Stroud (2016) and a team of researchers analyzed nine million comments on stories published on The New York Times website. These researchers found that slightly more than half of their representative sample of 1,471 Americans read or left news comments, with 14% commenting on the news. Commenters tended to be younger than those who neither comment nor read comments. Compared with those who read comments but do not comment, commenters also tended to be male and to have lower levels of education and lower incomes. Local newspaper and TV websites were the most popular places to comment. The top reason for commenting was to express an emotion or opinion (56%).
These findings were mirrored by Aschwanden (2016), who analyzed comments left on FiveThirtyEight.com and surveyed commenters. Of this group of initially more than 8,500 people, 76% identified as male, more than 60% were under 40 years old, and nearly a quarter commented daily. Motivation findings were also similar; 55% of respondents said they knew something that wasn’t in the article. The top three reasons for commenting were social and participatory: to express an opinion, add information, or correct information. Based on interviews with 25 German students, Ziegele, Breiner, and Quiring (2014) concluded that comments – whether on Facebook pages or news websites – are an essential component of online news stories.
Much of this research has consisted of content analyses focused on comment tone; motivations for why people do or do not comment; or implications for news producers and gatherers (Diakopoulos & Naaman, 2011a, 2011b; Nager, 2009; Shanahan, 2010; Stroud et al., n.d.; Thorson, Vraga, & Ekdale, 2010). There is also some research on using software to predict things such as which types of stories will draw the most comments or the political affiliation of a commenter (Diakopoulos, 2016).
But what effect might these comments have on the cognitive response of readers? Does the online participatory model that allows comments at the end of a news story mold or change perception of the news story’s message? Research suggests yes: comments may affect reader perception of content in specific circumstances. This work proposes an online experiment to tease out possible effects of comment sections attached to a policy news story when those sections include comments of a similar viewpoint, and whether any effect depends on the topic’s relevance to the reader. The work will also examine whether there is a relationship between these factors and story credibility, engagement, open-mindedness, and paying intent. This would be one of few studies to manipulate relevance in this context, and the first work we are aware of that extends this line of research to engagement and paying intent. If comment sections are found to affect reader perception of story content, news producers would have an incentive to take these spaces as seriously as they do original story content.
Crudely put, exemplification theory posits that exemplars are mediated events capable of representing other instances of which one has no direct experience (Zillman, 1999). A classic example used by Zillman is violent crime: most people have never experienced it, yet they have formed perceptions of it based on news reports or other people’s stories. Since there is no journalistic standard for the best use of exemplars, their use in news coverage can paint an inaccurate picture of an issue when, for the purposes of storytelling, a more extreme or dramatic example is selected than what occurs most often. Research has shown that this type of characterization can occur with social problems, leading to a change in perception of the majority position and one’s personal opinion (Brosius & Bathelt, 1994), and that exemplars can have a more profound effect than general descriptions of the number of people or things involved in a given social phenomenon (Zillman, Gibson, Sundar, & Perkins, 1996).
It could be argued that the news frames examined in a series of experiments by Iyengar (1994) functioned as exemplars. Iyengar concluded that the net effect of specific television news frames – arguably exemplars because they are mediated, representative events of which most people have no direct experience – was to make elected officials and public institutions less accountable to the American public. He studied political frames that were episodic or thematic. Episodic frames focus on specific events or cases, such as a report about a teenage drug user, while thematic frames put issues and events in some general context, such as changes in government spending on welfare. Episodic news frames tend to lead viewers to blame individuals in the conflict or situation, while thematic news frames tend to cause viewers to place responsibility at the societal level.
Iyengar studied these episodic and thematic television news frames in six specific areas – international terrorism, crime, poverty, unemployment, racial inequality, and the Reagan administration’s Iran-Contra dealings – to determine how viewers assign responsibility for the origin and treatment of problems. He found that most television news stories tended to be episodic, so the general effect was to cause viewers to attribute responsibility to individual victims or perpetrators rather than societal forces – thus supporting a proestablishment worldview. In practical terms, this meant attributing the causes of poverty to individuals who are poor because they are lazy, rather than to systemic issues in society that might contribute to the situation. Iyengar suggests that the shift in American public opinion in the 1980s toward hard policing and less government intervention in social welfare occurred in part because poverty, crime, and terrorism were depicted on television news in episodic frames, or exemplars. A few years later, Iyengar and Simon (1993) found that episodic framing of the Persian Gulf crisis led to increased viewer support for military means to end the conflict.
The majority of studies in the literature that pair comments with exemplification theory examine news comments in terms of online rating systems or comments from social networking sites (Lee & Jang, 2010; Peter, Rossman, & Keyling, 2014; Zerback & Fawzi, 2017); involve a news story that is slanted one way (Lee & Jang, 2010; Winter & Krämer, 2016); or concern a news story about a health, science, or controversial topic (Lee & Jang, 2016; Winter & Krämer, 2016; Knobloch-Westerwick, Johnson, Silver, & Westerwick, 2015; Peter et al., 2014; Zerback & Fawzi, 2017).
Other studies that employ different theories to examine internet comments have found that the valence of comments can influence thoughts about the original content in line with those comments. Similar to agreement, valence could arguably be seen as a type of exemplar. Walther, DeAndrea, Kim, and Anthony (2010) examined the valence of comments left on a YouTube PSA. In their work with college students, they found that negative comments led to more negative appraisals of the PSA and positive comments led to more positive appraisals. In a different study involving YouTube comments, Konijn, Veldhuis, and Plaisier (2013) found that adolescent users evaluate content in line with their peers’ comments: participants viewing a weight-neutral model made comments about her weight that aligned with peer comments. In a more recent study of comment valence, Dohle (2017) examined whether positive or negative comments affected perceptions of the quality of journalism. While the results were not significant, descriptive results trended toward an identical article being judged higher in quality by those who read positive comments than by those who read negative comments. In another online experiment with German students (von Sikorski & Hänelt, 2016), participants read a story about a scandalized individual and were randomly assigned to groups with different valences of comments. The researchers found that mixed comment sections, which included both positive and negative comments, led to lower perceptions of the journalistic quality of the online news story. This study also showed that participants who read positive comments judged the scandalized actor more positively.
In an experiment involving German participants, Kümpel and Springer (2015) found that readers judged a news article in line with the comments they were shown: if comments praised the impartiality and accuracy of the article, participants thought the article was more impartial, neutral, and accurate, and the equivalent held on the negative side.
In the exemplification framework, Zillman, Perkins, and Sundar (1992) found that exemplars tend to overpower “pallid” information. This would be the first study, to our knowledge, to examine comments as exemplars attached to news stories about generic policy proposals that present both sides of the issue. It would also be one of the few studies of this type based in the United States and conducted with a non-student population.
Given this, our first hypotheses are as follows:
H1a: Individual online news comments act as exemplars that influence reader perception of a story issue, such that participants who read comment sections with a majority of exemplars in agreement with a proposed policy change will be more likely to have their own views in agreement with the policy change.
H1b: Individual online news comments act as exemplars that influence reader perception of a story issue, such that participants who read comment sections with a majority of exemplars in disagreement with a proposed policy change will be more likely to have their own views in disagreement with the policy change.
Relevance in Agenda Setting
Agenda setting is the theory that media coverage influences what the public views as important. By emphasizing specific issues, the media encourage a specific train of thought and can lead audiences to particular conclusions (McCombs & Shaw, 1991; Price & Tewksbury, XXXX). Research suggests that this indirect method of persuasion can occur through how the news is presented as well as through specific features of news (Price & Tewksbury, 2007), and that even the presentation of news online, which allows greater individual control over exposure, sets a different agenda than print news coverage (Althaus & Tewksbury, 2002). McCombs and Stroud (2014) are among those who argue that personal relevance is a key ingredient in determining the strength of agenda-setting effects.
Relevance in processing. Other scholars research relevance in news as well, albeit in a different manner. One of the most seminal works in this area is Graber (XXXX). That in-depth focus group work found that people use media cues to process information, but only when the information is personally relevant in the first place. Graber argued that personal relevance supersedes the media’s power in telling people what to think or what to think about.
Other researchers have found that the depth at which information is processed may depend on how motivated or involved the audience is and how relevant the content is to them (Lang, 2000; Fogg, 2003; Metzger, 2003; Hilligoss & Rieh, 2007; Neuman, Just, & Crigler, XXX; Graber, XXXX; Chaiken & Maheswaran, 1994; Cacioppo, Petty, & Stoltenberg, 1985; Bulkow, Urban, & Schweiger, 2013; Winter & Krämer, XXX; Winter, Brückner, & Krämer, 2015). Relevance in this discourse concerns whether information is processed centrally (Lang, 2000), which requires more effort, or by peripheral means, in which the reader puts little effort into thinking about the information (Knobloch-Westerwick, Sharma, Hansen, & Alter, 2005) because it is not relevant, not important, or does not involve them.
Many researchers in this frame argue that audience members, including internet users (Metzger, Flanagin, & Medders, 2010), are more likely to use central or systematic processing to determine the credibility of online information if they are motivated or, in the case of political news, highly involved in politics (Price & Tewksbury, XXX). These audience members will instead use peripheral or heuristic cues – superficial analysis – when motivation is low due to irrelevance (Metzger, 2003; Chaiken & Maheswaran, 1994; Cacioppo, Petty, & Stoltenberg, 1985; Petty & Cacioppo, 1981).
Cues in irrelevant topics. These and other theories of superficial, environmental-cue processing due to irrelevance have been supported in studies of evaluating content on the web (e.g., Kim, 2007; Cacioppo et al., 1985), with some studies suggesting what those processing cues could be. Metzger, Flanagin, and Medders (2010) found that the opinions of others could be such a cue. Their study found that internet users rely heavily on information in social networks – from peer-to-peer reviews and consumer-product testimonials to firsthand experiences, and especially the presence of negative comments – to verify information needed offline, such as about products and health care. The authors concluded that people will rate information as more credible if others do, without major consideration of the content itself. Lee and Jang (2010) found that commentary served as more of such a cue than story approval ratings on controversial topics in which participants were not personally involved. In Winter and Krämer’s experiment (XXXX), which emphasized the dangers of playing violent video games, participants uninvolved in the topic were easily persuaded by a small number of comments and ratings that disagreed with the one-sided argument, while those to whom the topic was highly relevant did not change their personal opinion in light of the social cues of comments or ratings, even though they changed their assessment of the general climate of opinion on the topic based on the comments and user ratings of other people.
This is in line with superficial processing of information: when content is irrelevant, media consumers are more vulnerable to agenda-setting effects because the ability or willingness to counterargue with the presentation of news is low (Iyengar, Peters, & Kinder, 1982). In political news consumption and processing, this means those who are more politically involved and follow public affairs more closely – high relevance – would be more likely to counterargue, experience less of an agenda-setting effect, and be less susceptible to outside cues for evaluating content. By extension, those who find the content irrelevant may be more vulnerable to outside cues of news presentation that help set the agenda.
While much research presents information lower in relevance as the most easily influenced by peripheral or superficial cues, Hsueh, Yogeeswaran, and Malinen (2015) found, in a study of student volunteers in New Zealand involving a recent news story on a highly relevant issue, that participants’ own comments became more biased – both consciously and unconsciously – after reading prejudiced and antiprejudiced comments by others.
Agenda setting and cues. Pingree and Stoycheff (2013) developed a model that more actively combines agenda setting and information processing. They put forth agenda cueing, a theory supported by an online experiment, in which audiences delegate the difficulty of prioritizing topics to journalists, who provide “agenda cues” that serve as cognitive shortcuts in forming agenda judgments.
As with exemplification theory and comments, much of the work on comments, agenda setting, and cues has been conducted on college students or involves social networks (e.g., Winter et al., 2015). A recent experiment by Pjesivac, Geidner, and Cameron (2018) found that college students rated comments from expert sources as more credible than comments from generic Twitter or Facebook accounts in online news stories, but these comments were embedded in the story, which is not how most online news sites currently publish comments. In an experiment published last year, Waddell (2017) posited that negative comments are a threat to the perceived journalistic quality of news. Using 289 people from MTurk, the researcher showed participants Twitter comments from The New York Times’ Twitter page and the number of times stories were tweeted, retweeted, and liked, then had them read a seven-paragraph story about a heroin addict. Results showed that negative comments decreased perceptions of article credibility and issue importance, by way of a first-level agenda-setting effect, the author suggests.
Based on the research about the importance of relevance and cues in agenda setting, and given that national news in the United States served as a measure of relevance in the work of Graber, this work proposes the following hypothesis:
H2: Comments act as agenda-setting cues that influence readers’ opinions based on the relevance of the story topic, such that comments on topics that are irrelevant to readers will produce a greater difference in reader position on the proposed policy found in the story than comments on topics that are relevant to readers.
Participants. Participants were recruited through Amazon Mechanical Turk (MTurk). Five hundred people over the age of 18 completed the experiment in April 2018. After removing data from people who declined to have their data included at debriefing and eliminating participants who failed quality control measures (such as extremely fast reading speeds), data from 467 adults remained in the study. Then, participants whose completion times fell more than one standard deviation from the mean were removed, leaving a sample of about 400 adults (N = 399). The average age of participants was about 43 years old (birth year M = 1975, SD = 1.04). Participants were nearly evenly split between those who identified as male (n = 198) and female (n = 201). The average education level was an associate’s degree (M = 4.40, SD = 1.7). The overwhelming majority of participants identified as white (83%, n = 333), with nearly 8% identifying as Hispanic (n = 33) or black (n = 30) and nearly 5% identifying as Asian (n = 19). The average household income of participants was between $40,000 and $49,000 per year (M = 5.96, SD = 3.20). The experiment was conducted entirely online and integrated into MTurk through Qualtrics software. Participants received 50 cents for participation.
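As a rough illustration of the one-standard-deviation trimming step described above, the logic might look like the following sketch. The record structure, the `seconds` field, and the use of completion time as the trimming variable are our assumptions for illustration, not details reported by the study.

```python
import statistics

def trim_by_completion_time(records, k=1.0):
    """Drop records whose completion time lies more than k standard
    deviations from the mean time, mirroring a one-SD trim."""
    times = [r["seconds"] for r in records]
    mean = statistics.mean(times)
    sd = statistics.pstdev(times)  # population SD; a simplifying choice
    return [r for r in records if abs(r["seconds"] - mean) <= k * sd]
```

Applied to data like the study's, a step of this kind would reduce the 467 retained responses to the final sample of N = 399.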
Procedure. Participants were told they were participating in a web-based experiment about online news coverage of political policies and that there were quality control questions embedded in the project. Participants expressed consent to participate by clicking “I Agree” before the study started. The experiment was a 2 (comments in agreement or disagreement with a policy change) × 2 (relevant or irrelevant topic) between-subjects factorial design repeated across three blocks. Participants were randomly assigned to read one story in each of the three blocks, which varied by story relevance and comment agreement with the proposed policy change. After reading each story, participants answered questions regarding agreement with the comments, engagement, and paying intent. They then completed demographic information and were debriefed. The entire procedure took an average of 11 minutes (M = 674.22 seconds, SD = 453.21), with the final sample averaging less than 10 minutes (M = 590.72 seconds, SD = 206.85).
Stimuli. Three news stories written by a former reporter for a large metropolitan daily newspaper were created as stimuli. The stories concerned policy changes: increasing the minimum driving age to 18; imposing an excise tax on wholesale transactions of coffee beans and ground coffee; and requiring that all mobile devices come preloaded with a government app for emergency contacts. All the stories presented opposing viewpoints on the proposed change and were similar in length.
As a test of relevance, the stories differed only in location, the proper names of the reporters who purportedly wrote them, and the names of the people quoted in them. In the relevant conditions, the stories were set in the United States of America (Graber XXX). The headlines and bylines were as follows: “Bill Proposes Raising the Minimum Driving Age in U.S. to 18” by Stan Thompson; “Opposition Stirs Against Proposed Coffee Tax in U.S.” by Michelle Wang; and “Bill for Preloaded Government App on Mobile Devices in U.S. Set for Vote” by Roger Spruce. In the irrelevant conditions, the headlines and bylines were as follows: “Bill Proposes Raising the Minimum Driving Age in Belarus to 18” by Max Ivanova; “Opposition Stirs Against Proposed Coffee Tax in Slovakia” by Dominika Balog; and “Bill for Preloaded Government App on Mobile Devices in Moldova Set for Vote” by Bogdan Lungu.
To test for comment agreement or disagreement, a total of 16 comments – 8 in agreement with the proposed change and 8 in disagreement – were crafted to resemble comments one might read on a real online news story. Each story was immediately followed by 10 comments, 8 of one type and 2 of the opposite type. The comments were randomly mixed within each grouping of 10, but each grouping of 10 was the same for each of the stories with the same relevance.
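The comment-block construction described above can be sketched as follows. The placeholder comment strings and the function name are illustrative only; the fixed seed reflects the detail that each grouping of 10 was held constant across same-relevance stories rather than re-shuffled per participant.

```python
import random

# Placeholder stand-ins for the 8 agreement and 8 disagreement comments.
AGREE = [f"agree_{i}" for i in range(1, 9)]
DISAGREE = [f"disagree_{i}" for i in range(1, 9)]

def build_comment_block(majority="agree", seed=0):
    """Return one shuffled block of 10 comments: the 8 majority-side
    comments plus 2 drawn from the opposite side, randomly mixed.
    A fixed seed keeps the block identical across same-relevance stories."""
    rng = random.Random(seed)
    major, minor = (AGREE, DISAGREE) if majority == "agree" else (DISAGREE, AGREE)
    block = major + rng.sample(minor, 2)
    rng.shuffle(block)
    return block
```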
The 8 comments in agreement with the proposed change were: (a) I agree with this proposal! Glad to see that government folks are coming up with good ideas. (b) Yes, I agree with this change – let’s do this! (c) It’s about time our elected officials propose ideas like this that make things better for us. (d) Finally – I’ve been waiting for the “yes” vote on this change! (e) Yaaaaaasssss! (f) This proposed change is awesome…pass this already! (g) This change is the greatest – I can’t wait for it to pass! (h) This change is a great idea!
The 8 comments in disagreement with the proposed change were: (a) I completely disagree with this change! Only idiots come up with these new “rules”. (b) This proposal is so stupid! I hope it doesn’t pass. (c) This legislation is the dumbest thing ever! (d) This change is stupid – vote “no”! (e) Things are fine as they are – we don’t need this change! (f) No! (g) In total disagreement with this idea for a new law! (h) This change is not a national issue so it shouldn’t even be up for vote – I hope it fails miserably!
A manipulation check was performed to test whether the manipulations of relevance and agreement were successful. To test for relevance, after participants read each story they indicated on a 5-point Likert-type scale the extent to which they agreed that “this proposed change is relevant to me” and “this proposed change affects me”. To test for comment agreement, participants indicated on a 5-point Likert-type scale the extent to which “Most people support this proposed change” and “Most people are opposed to this proposed change.” All scales for the manipulation check ran from 1 (not at all) to 5 (very much).
Policy change agreement. Whether participants agreed with the comments was the main dependent measure. To measure the degree to which readers may have been influenced by comments in agreement or disagreement with the story, participants answered questions based on the open-mindedness measures used by Thorson et al. (2010). Participants marked on a Likert-type scale the extent to which the story and comments were “inconsistent with my view on the issue” and “antagonistic toward my position on the issue.” The scale had 5 points ranging from 1 (not at all) to 5 (very much), so the lower the number, the more the participant was in line with comment agreement.
Credibility. Credibility was a dependent variable. In journalism studies, credibility is measured in multiple ways (Hilligoss & Rieh, 2007). We measured credibility using a Likert-type scale with items drawing on aspects of scales used by Thorson et al. (2010) and Mayo-Leshner (2000). Participants answered to what extent “This story is fair”; “This story is biased”; and “This story is accurate”. The scale had 5 points ranging from 1 (not at all) to 5 (very much), so the lower the number, the more the participant thought the story lacked credibility.
Engagement. Engagement was a dependent variable. To capture it, the authors modified the participation activities in Borah (2014) to include activities that are likely in today’s climate, measured on a Likert-type scale. Participants answered to what extent they would “share this news on Facebook” and “post this news on Facebook”. They also answered to what extent they would want to “leave a comment with my opinion in an online comment section below the story”; “attend a rally about the issue if there were one”; and “sign an online petition such as Change.org to influence public policy about the issue if there were one”. For most questions the scale ran from 1 (not at all) to 5 (very much), with a sixth option of not applicable added for the two questions about possible Facebook activity. The lower the number, the less likely participants were to engage.
Paying intent. Lastly, based on the work of Chyi and Lee (2013), paying intent was measured on a Likert-type scale. Participants answered to what extent they were “likely to pay for this news article” and “willing to pay for this news article.” The scale had 5 points ranging from 1 (not at all) to 5 (very much), so the lower the number, the less likely participants were to pay for news.
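Multi-item Likert measures like those above are typically scored as the mean of their items; a minimal sketch follows. The item names are hypothetical, and whether the “This story is biased” item was reverse-coded is our assumption, inferred from the stated scale direction rather than reported by the study.

```python
def score_scale(responses, reverse=()):
    """Average 1-5 Likert items into a scale score; items named in
    `reverse` are reverse-coded (6 - value) before averaging."""
    vals = [(6 - v) if item in reverse else v for item, v in responses.items()]
    return sum(vals) / len(vals)
```

Under this coding, a reader who rated the story fair (5), biased (1), and accurate (5) would receive the maximum credibility score of 5.0.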
This work sought to determine the potential effects of reading comments that presented one viewpoint about a proposed, noncontroversial policy change, with the comments appearing on the same platform and space as the news article about the change (rather than on social media channels). This study examines these factors through the lens of exemplification theory and agenda setting. To our knowledge, this would be one of the first such experiments using a broad section of participants in the United States.
H1a and H1b state that if a comment section has a majority of comments in agreement or disagreement with the proposed change, then study participants would be more likely to support or oppose the policy change, respectively. If the hypotheses are supported, it would provide evidence that comments can act as exemplars and so influence opinion about national policy matters covered in the media – not based on the valence of the comments, but on their mere existence.
H2 examined the issue of story topic relevance and comments on an online news story. It proposes that for readers who found the proposed policy change irrelevant, comments that either agreed or disagreed with the change would make more of a difference in the reader’s policy position than if readers found the story relevant to them. If the results show this, it would provide evidence that comments act as agenda-setting media cues and have the most impact on participants for matters with which they are unfamiliar. This could imply that, especially for matters that readers process superficially, the opinions of others in the form of comments can have greater influence on what to think.
If evidence for either hypothesis is not found, it could support the work of Graber (XXX), which concluded that media and media cues – in this case comments – can influence what people give attention to, but only if the issue is relevant to them in the first place, so the media do not dictate what people think or think about. Not finding support for these hypotheses could also add to research suggesting that media – in this study, specifically comments – likely reinforce opinions that people already hold.
One limitation of this experiment is that it was not designed to capture baseline information, such as a control condition or participants’ existing opinions before they completed the study. For example, Mutz (1997) found that when people see the public opinions of others, they engage in a self-persuasion process in which their own opinion moves toward those of others. In a national panel, Mutz found that arguments for something become more salient and rehearsed in a person’s mind than those in opposition. Since we did not get baseline opinions or interview the subjects about why they chose their opinions, if the hypotheses are supported, we do not know whether participants engaged in this self-persuasion process.
Another potential drawback to the study is the wording used to gather participants’ possible agreement or disagreement with the proposed policy change: both questions used words that could be considered negative, “inconsistent” and “antagonistic”. Experiments by Winter (2015) and Winter et al. (2015) found that negative comments have more impact on content perception of a controversial issue. In their work, participants who read negative comments were affected whether or not they found the topic highly relevant, and the authors suggested that those to whom the issue was highly relevant more actively worked to contradict the negative comments. Future work should consider more balanced language, adding positive-type words to assess agreement with the original news article.
Because we did not assess or gather individual traits of participants beyond demographic information, we could not capture motivation or thought processes beyond the questions asked, so there may have been other factors that could explain the results. For example, Metzger et al. (2010) found in their focus groups on evaluating credibility online that if the people giving opinions were similar to participants, then the opinions of others could become an indicator of credibility. Similarly, Knobloch-Westerwick, Johnson, Silver, and Westerwick (2015) found that people who spent more time reading comments about science and technology issues were more empathetic than people who spent more time viewing stories with numbers.
Overall, if the results show support for the hypotheses, it would mean that comments can affect perception of story content on noncontroversial policy matters. News producers would then have an incentive to take these spaces as seriously as the original story content. This may mean moderating the comments or changing how these spaces are designed, adding to literature suggesting that the design of content on the Web can affect credibility (Fogg, 2003; Fogg et al., 2003; Metzger, 2007; Hilligoss & Rieh, 2007).
Published studies suggest that moderated comment sections reduce incivility and that readers prefer moderated sections (Levy, 2017). Even something as simple as adding a “Respect” button has changed the tone (Stroud, Muddiman & Scacco, 2017). Generally speaking, journalists are already tasked with multiple duties in addition to reporting and writing – sometimes they are taking photos or video, monitoring and posting on social media platforms, and so on. Studies also show that journalists do not think reading or participating in comment sections is worth their time (Chen & Pain, 2016; Nielsen, 2014). Most journalists care about the content they publish, so research may be needed to determine whether content in the commentary is affecting reader interpretation of their work, to show journalists that moderation may be needed.
Althaus, S. L., & Tewksbury, D. (2002). Agenda setting and the “New” news. Communication Research, 29(2), 180-207. doi:10.1177/0093650202029002004
Aschwanden, C. (2016). We asked 8,500 internet commenters why they do what they do. FiveThirtyEight. Retrieved from https://fivethirtyeight.com/features/we-asked-8500-internet-commenters-why-they-do-what-they-do/
Borah, P. (2014). Does it matter where you read the news story? interaction of incivility and news frames in the political blogosphere. Communication Research, 41(6), 809-827. doi:10.1177/0093650212449353
Brosius, H., & Bathelt, A. (1994). The utility of exemplars in persuasive communications. Communication Research, 21(1), 48-78. doi:10.1177/009365094021001004
Bulkow, K., Urban, J., & Schweiger, W. (2013). The duality of agenda-setting: The role of information processing. International Journal of Public Opinion Research, 25(1), 43-63. doi:10.1093/ijpor/eds003
Camaj, L. (2014). Need for orientation, selective exposure and attribute agenda setting effects. Mass Communication and Society. doi:10.1080/15205436.2013.835424
Chaiken, S., & Maheswaran, D. (1994). Heuristic processing can bias systematic processing: Effects of source credibility, argument ambiguity, and task importance on attitude judgment. Journal of Personality and Social Psychology, 66(3), 460-473. doi:10.1037/0022-3514.66.3.460
Chen, G. & Pain, P. (2016). Journalists and online comments. Engaging News Project. Retrieved from https://mediaengagement.org/research/journalists-and-online-comments/.
Chung, C. J., Nam, Y., & Stefanone, M. A. (2012). Exploring online news credibility: The relative influence of traditional and technological factors. Journal of Computer-Mediated Communication, 17(2), 171-186. doi:10.1111/j.1083-6101.2011.01565.x
Chyi, H. I., & Lee, A. M. (2013). Online news consumption. Digital Journalism, 1(2), 194-211. doi:10.1080/21670811.2012.753299
Cockburn, A., & McKenzie, B. (2001). What do web users do? An empirical analysis of web use. International Journal of Human-Computer Studies, 54(6), 903-922. doi:10.1006/ijhc.2001.0459
Diakopoulos, N. (2016). Artificial moderation: A reading list. Coral Project Blog. Retrieved from https://blog.coralproject.net/artificial-moderation-a-reading-list/
Diakopoulos, N., & Naaman, M. (2011). Topicality, time, and sentiment in online news comments. Paper presented at CHI '11 Extended Abstracts on Human Factors in Computing Systems, 1405-1410.
Diakopoulos, N., & Naaman, M. (2011). Towards quality discourse in online news comments. Paper presented at the Proceedings of the ACM 2011 Conference on Computer Supported Cooperative Work, 133-142.
Dohle, M. (2018). Recipients’ assessment of journalistic quality. Digital Journalism, 6(5), 563-582. doi:10.1080/21670811.2017.1388748
Fogg, B. J. (2003). Persuasive technology: Using computers to change what we think and do. Amsterdam: Morgan Kaufmann Publishers.
Fogg, B. J., Soohoo, C., Danielson, D. R., Marable, L., Stanford, J., & Tauber, E. R. (2003). How do users evaluate the credibility of Web sites? A study with over 2,500 participants. Proceedings of the 2003 Conference on Designing for User Experiences (DUX '03).
Graber, D. A. (1988). Processing the news: How people tame the information tide (2nd ed.). New York: Longman.
Graham, F. K., & Clifton, R. K. (1966). Heart-rate change as a component of the orienting response. Psychological Bulletin, 65(5), 305-320. doi:10.1037/h0023258
Hilligoss, B., & Rieh, S. Y. (2008). Developing a unifying framework of credibility assessment: Construct, heuristics, and interaction in context. Information Processing & Management, 44(4), 1467-1484. doi:10.1016/j.ipm.2007.10.001
Hsueh, M., Yogeeswaran, K., & Malinen, S. (2015). ‘Leave your comment below’: Can biased online comments influence our own prejudicial attitudes and behaviors? Human Communication Research, 41(4), 557-576. doi:10.1111/hcre.12059
Iyengar, S. (1994). Is anyone responsible? How television frames political issues. Chicago: University of Chicago Press.
Iyengar, S., & Simon, A. (1993). News coverage of the gulf crisis and public opinion: A study of agenda-setting, priming, and framing. Communication Research, 20(3), 365-383.
Iyengar, S., Peters, M. D., & Kinder, D. R. (1982). Experimental demonstrations of the “not-so-minimal” consequences of television news programs. The American Political Science Review, 76(4), 848-858. doi:10.2307/1962976
Katz, E., Adoni, H., & Parness, P. (1977). Remembering the news: What the picture adds to recall. Journalism Quarterly, 54(2), 231-239. doi:10.1177/107769907705400201
Kim, S., Scheufele, D. A., & Shanahan, J. (2002). Think about it this way: Attribute agenda-setting function of the press and the public’s evaluation of a local issue. Journalism & Mass Communication Quarterly, 79(1), 7-25. doi:10.1177/107769900207900102
Knobloch-Westerwick, S., Johnson, B. K., Silver, N. A., & Westerwick, A. (2015). Science exemplars in the eye of the beholder: How exposure to online science information affects attitudes. Science Communication, 37(5), 575-601. doi:10.1177/1075547015596367
Knobloch-Westerwick, S., Sharma, N., Hansen, D. L., & Alter, S. (2005). Impact of popularity indications on readers’ selective exposure to online news. Journal of Broadcasting & Electronic Media, 49(3), 296-313. doi:10.1207/s15506878jobem4903_3
Konijn, E. A., Veldhuis, J., & Plaisier, X. S. (2013). YouTube as a research tool: Three approaches. CyberPsychology, Behavior & Social Networking, 16(9), 695-701. doi:10.1089/cyber.2012.0357
Kümpel, A. S., & Springer, N. (2015). How user comments on a news site affect perceptions of journalistic quality: An experimental study using structural equation modeling. Paper presented at Selected Papers of Internet Research 16: The 16th Annual Meeting of the Association of Internet Researchers, Phoenix, AZ.
Lang, A. (2000). The limited capacity model of mediated message processing. Journal of Communication, 50(1), 46-70. doi:10.1111/j.1460-2466.2000.tb02833.x
Lee, E. (2012). That’s not the way it is: How user-generated comments on the news affect perceived media bias. Journal of Computer-Mediated Communication, 18(1), 32-45. doi:10.1111/j.1083-6101.2012.01597.x
Lee, E., & Jang, Y. J. (2010). What do others’ reactions to news on internet portal sites tell us? effects of presentation format and readers’ need for cognition on reality perception. Communication Research, 37(6), 825-846. doi:10.1177/0093650210376189.
Levy, C. (2017). Press run: Introducing the reader center. The New York Times.
Mayo, J., & Leshner, G. (2000). Assessing the credibility of computer-assisted reporting. Newspaper Research Journal, 21(4), 68-82. doi:10.1177/073953290002100405
Metzger, M. J. (2007). Making sense of credibility on the web: Models for evaluating online information and recommendations for future research. Journal of the American Society for Information Science and Technology, 58(13), 2078-2091.
Metzger, M. J., Flanagin, A. J., & Medders, R. B. (2010). Social and heuristic approaches to credibility evaluation online. Journal of Communication, 60(3), 413-439. doi:10.1111/j.1460-2466.2010.01488.x
Mutz, D. C. (1997). Mechanisms of momentum: Does thinking make it so? The Journal of Politics, 59(1), 104-125. doi:10.2307/2998217.
Nagar, N. (2009). The loud public: Users’ comments and the online news media. Online Journalism Symposium.
Nielsen, C. E. (2014). Coproduction or cohabitation: Are anonymous online comments on newspaper websites shaping news content? New Media & Society, 16(3), 470-487. doi:10.1177/1461444813487958
Peter, C., Rossmann, C., & Keyling, T. (2014). Exemplification 2.0: Roles of direct and indirect social information in conveying health messages through social network sites. Journal of Media Psychology: Theories, Methods, and Applications, 26(1), 19-28. doi:10.1027/1864-1105/a000103
Petty, R. E., & Cacioppo, J. T. (1979). Issue involvement can increase or decrease persuasion by enhancing message-relevant cognitive responses. Journal of Personality and Social Psychology, 37(10), 1915-1926. doi:10.1037/0022-3514.37.10.1915
Pingree, R. J., & Stoycheff, E. (2013). Differentiating cueing from reasoning in Agenda‐Setting effects. Journal of Communication, 63(5), 852-872. doi:10.1111/jcom.12051
Pjesivac, I., Geidner, N., & Cameron, J. (2018). Social credibility online: The role of online comments in assessing news article credibility. Newspaper Research Journal. Advance online publication. doi:10.1177/0739532918761065
Purcell, K., Rainie, L., Mitchell, A., Rosenstiel, T., & Olmstead, K. (2010). Understanding the participatory news consumer. Pew Internet and American Life Project, 1, 19-21.
Scheufele, D. A., & Tewksbury, D. (2007). Framing, agenda setting, and priming: The evolution of three media effects models. Journal of Communication, 57(1), 9-20. doi:10.1111/j.1460-2466.2006.00326.x
Shanahan, M. (2010). Changing the meaning of peer-to-peer? exploring online comment spaces as sites of negotiated expertise. Journal of Science Communication, 9(1), 1-13.
Rieh, S. Y. (2002). Judgment of information quality and cognitive authority in the web. Journal of the American Society for Information Science & Technology, 53(2), 145-161.
Stroud, N. (2016, August 12). Readers Should Use the Online “Comment” Section of Newspapers More Often. UT News. Retrieved from https://news.utexas.edu/2016/08/12/lets-use-the-online-comment-section-of-newspapers-more.
Stroud, N. J., Muddiman, A., & Scacco, J. M. (2017). Like, recommend, or respect? altering political behavior in news comment sections. New Media & Society, 19(11), 1727-1743. doi:10.1177/1461444816642420.
Sundar, S. S., & Nass, C. (2001). Conceptualizing sources in online news. Journal of Communication, 51(1), 52-72. doi:10.1111/j.1460-2466.2001.tb02872.x
Thorson, K., Vraga, E., & Ekdale, B. (2010). Credibility in context: How uncivil online commentary affects news credibility. Mass Communication & Society, 13(3), 289-313. doi:10.1080/15205430903225571
Tversky, A., & Kahneman, D. (1973). Availability: A heuristic for judging frequency and probability. Cognitive Psychology, 5(2), 207-232. doi:10.1016/0010-0285(73)90033-9
von Sikorski, C., & Hänelt, M. (2016). Scandal 2.0: How valenced reader comments affect recipients’ perception of scandalized individuals and the journalistic quality of online news. Journalism & Mass Communication Quarterly, 93(3), 551-571. doi:10.1177/1077699016628822
Wathen, C. N., & Burkell, J. (2002). Believe it or not: Factors influencing credibility on the web. Journal of the American Society for Information Science and Technology, 53(2), 134-144. doi:10.1002/asi.10016
Walther, J. B., DeAndrea, D., Kim, J., & Anthony, J. C. (2010). The influence of online comments on perceptions of antimarijuana public service announcements on YouTube. Human Communication Research, 36(4), 469-492. doi:10.1111/j.1468-2958.2010.01384.x
Winter, S., & Krämer, N. (2016). Who's right: The author or the audience? Effects of user comments and ratings on the perception of online science articles. Communications, 41(3), 339. doi:10.1515/commun-2016-0008
Winter, S., Brückner, C., & Krämer, N. C. (2015). They came, they liked, they commented: Social influence on facebook news channels. Cyberpsychology, Behavior, and Social Networking, 18(8), 431-436. doi:10.1089/cyber.2015.0005
Kim, Y. M. (2007). How intrinsic and extrinsic motivations interact in selectivity: Investigating the moderating effects of situational information processing goals in issue publics' web behavior. Communication Research. doi:10.1177/0093650206298069
Zerback, T., & Fawzi, N. (2017). Can online exemplars trigger a spiral of silence? examining the effects of exemplar opinions on perceptions of public opinion and speaking out. New Media & Society, 19(7), 1034-1051. doi:10.1177/1461444815625942
Ziegele, M., Breiner, T., & Quiring, O. (2014). What creates interactivity in online news discussions? an exploratory analysis of discussion factors in user comments on news items. Journal of Communication, 64(6), 1111-1138. doi:10.1111/jcom.12123
Zillmann, D. (1999). Exemplification theory: Judging the whole by some of its parts. Media Psychology, 1(1), 69-94.
Zillmann, D., & Brosius, H. (2000). Exemplification in communication: The influence of case reports on the perception of issues. Mahwah, NJ: Lawrence Erlbaum Associates.
Zillmann, D., Gibson, R., Sundar, S. S., & Perkins, J. W. (1996). Effects of exemplification in news reports on the perception of social issues. Journalism & Mass Communication Quarterly, 73(2), 427-444. doi:10.1177/107769909607300213
Zillmann, D., Perkins, J. W., & Sundar, S. S. (1992). Impression-formation effects of printed news varying in descriptive precision and exemplifications. Zeitschrift für Medienpsychologie, 4(3), 168-185.