Does AI pose a fatal threat to journalism? An authoritative report from the Columbia University School of Journalism gives you the answer | Leibao
Text | Li Xiuqin
Report from Leiphone.com (leiphone-sz)
Recently, the Tow Center for Digital Journalism and the Brown Institute for Media Innovation held a policy exchange forum with technology experts and journalists to discuss how AI affects news media and how to better apply AI to the news field. In this report, they focused on the following four issues:
- How can journalists use AI to assist in reporting?
- Which newsroom roles could AI replace?
- What are some areas where news organizations are not yet using AI technology?
- Will AI eventually become an integral part of news reporting?
Topic 1: Application of AI in Journalism
Every newsroom has its own approach to AI. After studying several cases, the report identifies three categories of activity where AI has contributed most in newsrooms:
- When data sets are large or complex, AI can serve as a way in, surfacing the anomalous or exceptional cases that call for human verification - a role that fits naturally into standard newsroom workflows.
- Identifying trends (or deviations from trends): AI's massive computing power can summarize data in aggregate, or by time, geographic, or demographic grouping, and can quickly flag outliers.
- The application of AI or computation can itself be the subject of a story: algorithms are built by humans, so they inevitably carry human biases. How do those biases surface in these tools, and what unpredictable situations arise when specific countries or cities adopt and apply them?
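The trend-and-outlier category above can be illustrated with a minimal sketch of z-score outlier detection. The data and threshold here are invented for the example; real newsroom analyses would use far richer data sets.

```python
from statistics import mean, stdev

def find_outliers(values, z_threshold=2.0):
    """Return (index, value) pairs whose z-score exceeds the threshold."""
    mu = mean(values)
    sigma = stdev(values)
    return [(i, v) for i, v in enumerate(values)
            if sigma and abs(v - mu) / sigma > z_threshold]

# Hypothetical example: monthly counts of one reported crime category.
monthly_counts = [102, 98, 110, 105, 99, 101, 240, 103, 97, 100, 104, 96]
print(find_outliers(monthly_counts))  # the July spike stands out: [(6, 240)]
```

A flagged month is not a story by itself, but it tells a reporter where to start asking questions.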
1. Several specific cases
AI can enhance journalists' work in several ways: by classifying documents and by identifying outliers in data. Experienced human judgment, of course, remains an essential part of newsroom work.
While AI already writes many well-documented story types, such as sports recaps, company earnings releases, and even reports on data patterns like earthquakes, few attendees believed journalists' jobs will be completely replaced by machines or algorithms. With human oversight and verification of results, AI can instead free writers from rewriting the same stories over and over and let them take on more original reporting.
Several recent cases have achieved notable success:

- Los Angeles Times reporters used a classifier to detect instances of the LAPD (Los Angeles Police Department) downgrading crime classifications.
- The Atlanta Journal-Constitution investigated sexual abuse committed by doctors.
- Reuters used topic modeling to examine access to the U.S. Supreme Court.
- ProPublica (a non-governmental, non-profit online news organization in the United States) recently launched a machine learning-based tool with Google, the Hate Crime News Record Index, which analyzes large volumes of news articles to build a nationwide early-warning map of where hate crimes occur.
- The New York Times used facial recognition technology to identify audience members in its coverage of President Trump's inauguration.
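The LAPD story relied on a trained text classifier over police reports. As a toy sketch of the underlying idea, not the Times' actual method, a rule-based version might flag records whose narrative sounds serious but whose official label says "minor". The keyword list and records below are invented.

```python
# Hypothetical terms that suggest a serious offense.
SERIOUS_TERMS = {"stabbed", "gun", "knife", "hospitalized", "fracture"}

def flag_possible_downgrade(record):
    """record: dict with a free-text 'narrative' and an official 'label'.
    Returns True when the wording and the label appear to disagree."""
    words = {w.strip(".,!?$") for w in record["narrative"].lower().split()}
    return record["label"] == "minor" and bool(words & SERIOUS_TERMS)

reports = [
    {"narrative": "Victim was stabbed with a knife and hospitalized", "label": "minor"},
    {"narrative": "Shoplifting of items under $50", "label": "minor"},
]
print([flag_possible_downgrade(r) for r in reports])  # [True, False]
```

A real classifier would be learned from labeled examples rather than a hand-written word list, but the output serves the same purpose: a shortlist of records for a reporter to verify by hand.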
Some journalists may grab sample code from GitHub and apply it to their reporting. Unless a journalist has a solid understanding of these tools and techniques, however, there is a real risk of errors or misuse.
2. Journalists should be aware of pitfalls when using data
Journalists should beware of pitfalls when using data, whether it comes from social media or from government agencies. They must carefully assess the reliability of these new information sources, especially when AI is involved. Journalists who draw on Twitter data, for example, should be cautious about treating it as representative of broader social behavior.
3. Publishers’ challenges: both large and small news organizations
With all these new tools, news organizations have the responsibility and obligation to train reporters, editors, and developers on how to use them properly. While funding may not be a problem for large news organizations like The New York Times, it will be a challenge for smaller news organizations with fewer resources.
One decision newsroom leaders may face is how to collaborate with others on the use of AI tools, since investigative analysis and team building using complex data sets and custom algorithms can take months, and not all newsrooms can do it on their own.
Collaborating with academic institutions and researchers can be a great way for news organizations to begin using AI tools in their newsrooms. However, the cultures of newsrooms and academic labs are very different, and the two may have different goals for creating AI tools.
Topic 2: How does AI technology adapt to news reporting rules?
How do AI technologies fit into the news pipeline? As noted earlier, AI is playing a growing role in reporting, content creation, distribution, and audience engagement. In recent years, for example, crowdsourcing, brainstorming, and fact-checking tools have been developed to collect information, especially for structuring relevant data. In the contemporary newsroom, automation has become a key competitive tool, not only for audience attention but also for competing with large platforms such as Netflix, Facebook, and Amazon.
1. Automated writing and personalized recommendations
Automation can handle a large number of tasks in a short period of time, such as analyzing and summarizing large amounts of data in a few minutes or even seconds, thereby reducing the burden on journalists as much as possible. On the other hand, many social media platforms and Internet companies have also demonstrated that personalized push is a powerful tool to capture attention. For example, Netflix uses behavioral data to provide viewers with viewing recommendations; Amazon's success is partly due to its data-driven personalized design for shopping experiences.
1) Case 1: Wibbitz
Wibbitz has begun working with the sports desk of USA Today. In just a few seconds, the company can create short videos from text reports written by journalists. Wibbitz's core AI technology is "text-to-video": the system first analyzes the text of a story and produces a summary, then converts that summary into a short video with a voice-over accompanied by photos, graphics, and other media. In effect, the AI-driven software compresses a text report into a story script, strings together a series of images or video clips, and adds a voice-over.
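The pipeline just described (summarize, then pair narration with visuals) can be sketched in miniature. This is a naive illustration, not Wibbitz's actual technology: the summarizer simply keeps the leading sentences, and each scene's "visual" is a hypothetical image-search keyword taken from the sentence.

```python
import re

def summarize(text, max_sentences=2):
    """Naive extractive summary: keep the leading sentences."""
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    return sentences[:max_sentences]

def build_script(text):
    """Turn each summary sentence into a scene: voice-over plus an image query."""
    scenes = []
    for sentence in summarize(text):
        # Crude keyword choice: the longest word in the sentence.
        keyword = max(re.findall(r"[A-Za-z]+", sentence), key=len)
        scenes.append({"voiceover": sentence, "image_query": keyword.lower()})
    return scenes

article = ("The quarterback threw three touchdowns in the fourth quarter. "
           "The comeback sealed a playoff berth. Fans stormed the field afterward.")
for scene in build_script(article):
    print(scene)
```

A production system would use trained summarization models and a licensed media library, but the shape of the pipeline (text, summary, scene list, rendered video) is the same.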
2) Case 2: BuzzFeed
BuzzFeed is another well-known media outlet that has entered the AI field. During the 2016 US election, BuzzFeed's Open Lab for Journalism developed a news bot (Buzzbot) that collected news from different sources at the Republican National Convention. The AI-driven news aggregator tracked real-time election results and voting reports so that reporters did not have to do this work by hand. With the bot handling those tasks, BuzzFeed reporters could focus on more complex, scene-based stories, the kind of news that machine learning systems cannot produce on their own.
3) Case 3: Reuters
To tackle the problem of separating true from false information, Reuters uses a news-tracking system called News Tracer that sifts through 500 million tweets a day to distinguish real news events from fake news, implausible claims, advertisements, and noise. With the algorithm's help, journalists can cut through the flood of social media posts and spend more time digging for stories. What sets News Tracer apart from other monitoring tools is that it imitates the way journalists think: programmers embedded 40 evaluation indicators in the algorithm, such as the location and identity of the original poster and the way the news spreads, to produce a news credibility score. The system also cross-checks sources that journalists have already judged reliable and identifies other potential sources of information.
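The credibility-scoring idea behind News Tracer can be sketched as a weighted combination of signals. The real system uses some 40 indicators; the three signals and their weights below are invented purely for illustration.

```python
# Hypothetical signal weights (the real indicators and weights are not public).
WEIGHTS = {
    "verified_account": 0.4,  # original poster is a verified account
    "has_location": 0.2,      # post carries location metadata
    "cites_source": 0.4,      # text links to or names a source
}

def credibility_score(signals):
    """Combine boolean signals into a score between 0 and 1."""
    return sum(WEIGHTS[name] for name, present in signals.items() if present)

tweet_signals = {"verified_account": True, "has_location": False, "cites_source": True}
print(round(credibility_score(tweet_signals), 2))  # 0.8
```

In practice such weights would be learned from examples journalists have already verified, and the score would only rank candidate events for human review, not publish anything on its own.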
4) Case 4: Associated Press
The Associated Press was one of the first media organizations to adopt AI. As early as 2014, the AP worked with Automated Insights, the US company behind the automated writing program Wordsmith, then the world's only public natural language generation platform, to programmatically write news reports on the quarterly earnings of listed companies. Before automating this work, AP journalists could produce only a few hundred earnings stories each quarter, leaving thousands of companies' reports uncovered. After adopting Wordsmith, the AP's output of corporate earnings stories increased twelvefold.
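Template-based natural language generation, the general technique behind tools like Wordsmith, can be sketched in a few lines; the template, company name, and figures below are invented.

```python
# A single hypothetical story template; real systems hold many variants.
TEMPLATE = ("{company} reported quarterly revenue of ${revenue}M, "
            "{direction} {change}% from a year earlier.")

def earnings_story(company, revenue, prior_revenue):
    """Fill the template from structured earnings data."""
    change = (revenue - prior_revenue) / prior_revenue * 100
    direction = "up" if change >= 0 else "down"
    return TEMPLATE.format(company=company, revenue=revenue,
                           direction=direction, change=round(abs(change), 1))

print(earnings_story("Acme Corp", 120, 100))
# Acme Corp reported quarterly revenue of $120M, up 20.0% from a year earlier.
```

Because the input is already structured (a data feed of earnings figures), thousands of such stories can be generated per quarter, which is exactly the scaling effect the AP reported.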
2. Comment system and audience participation
In June this year, The New York Times partnered with Jigsaw, the technology incubator under Google's parent company Alphabet, to use the latter's Perspective machine learning technology to help moderate comments on news stories. According to Leifeng.com, the Times previously had 14 moderators handling about 12,000 comments every day, and comments were enabled on only about 20% of articles. The AI tool separates harmful comments from constructive ones, which not only reduces moderators' workload by 25% but also allows comments to be enabled on up to 80% of articles.
The New York Times wants to use the tool to build a platform where moderators and readers interact more deeply. A big challenge remains, however: establishing common ground and respect for differing viewpoints, so that the views in readers' comment sections stay aligned with the reporting. With this machine learning tool, moderators can not only process comments faster but also group similar comments using predictive models.
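The moderation workflow described above can be sketched as a triage step: auto-handle the clear cases and queue only borderline comments for human review. The scoring function here is a toy keyword stand-in for a trained model like Perspective; the word list and thresholds are invented.

```python
# Invented word list standing in for a learned toxicity model.
TOXIC_WORDS = {"idiot", "stupid", "trash"}

def toxicity(comment):
    """Fraction of words that match the toxic list (a crude proxy score)."""
    words = comment.lower().split()
    return sum(w.strip(".,!?") in TOXIC_WORDS for w in words) / max(len(words), 1)

def triage(comments, reject_above=0.3, approve_below=0.05):
    """Auto-approve clean comments, auto-reject clearly toxic ones,
    and queue everything in between for human moderators."""
    queues = {"approve": [], "reject": [], "review": []}
    for c in comments:
        score = toxicity(c)
        if score >= reject_above:
            queues["reject"].append(c)
        elif score < approve_below:
            queues["approve"].append(c)
        else:
            queues["review"].append(c)
    return queues

comments = ["Great reporting, thank you.",
            "You idiot, this is trash!",
            "This stupid policy will fail badly eventually maybe"]
print(triage(comments))
```

Shrinking the human queue to the ambiguous middle band is what lets the same 14 moderators cover far more articles.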
Topic 3: Algorithms and ethics: Should we blame humans or algorithms?
The use of AI tools in newsrooms, such as machine learning, natural language processing, facial recognition, and machine vision, inevitably raises questions of human ethics. Three aspects are involved.
1. Transparency and Accountability
Because AI can play so many roles in journalism, it is important to explain carefully when, where, and how AI is used. For example, if a chatbot that interacts with users is powered by AI, how can the bot explain to the audience how it works? The audience needs to know how a story was constructed and what choices the machine made along the way. And when AI goes wrong, who is ultimately at fault? How should errors caused by human-built algorithms be explained, and should humans or the algorithms be blamed?
According to ProPublica’s research, algorithmic bias is mathematically inevitable. Even so, journalists should be responsible for these AI systems and encourage accountability in the process of building algorithmic systems.
2. Editorial decisions and bias
Algorithms play an increasingly common role in news curation, and algorithms that make editorial decisions on a human's behalf need to encode human editorial judgment. Take chatbots: like humans, a computer cannot hold a conversation about content it does not understand, so a bot can only discuss topics for which we have built a model of the conversational context.
Complicating the idea of debiasing, data is often assumed to be neutral when it rarely is. Much of the machine learning in practical use is "supervised learning," and such algorithms can neither reproduce human mental models nor reliably reconstruct causal relationships.
3. Ethical use of data
The ethical use of data is a fundamental issue that every journalist needs to face, and the same principles apply to companies that handle large amounts of data. Although there are many social media platforms that provide data to journalists, the relationship between data publishers and platforms regarding open access to data remains complex.
The “black box” nature of many algorithms obscures critical awareness of the decisions the software is making, so journalists need to use this critical attitude in their research and reporting whenever possible.
Seven research conclusions
As mentioned earlier, this research yields the following seven major findings on whether AI is a threat or a help to the journalism industry.
- AI tools can help journalists tell or report stories that were previously impractical or technically impossible. While AI may transform journalism, it will augment rather than replace the work of journalists; to use AI properly, humans must remain vigilant at all times.
- There is a knowledge gap and a communication gap between the technologists who design AI and the journalists who use it, which can lead to errors in news coverage.
- Readers deserve transparency about how AI tools are used to conduct analysis, identify patterns, and report findings in stories.
- While the intersection of AI and data offers new opportunities for reader engagement, monetization, and personalized news feeds, balancing the risk of echo chambers against journalism's public-service mission remains a challenge.
- The ethical use and disclosure of data (how user information is collected, stored, used, analyzed, and shared) is a fundamental issue journalists must face.
- AI has the potential to enhance the work of journalists, but challenges remain in open access to data.
- AI is unpredictable. We cannot confidently predict where the biggest problems will arise, so technologists and journalists need to remain vigilant to ensure AI systems are accurate.
Conclusion
As things stand, AI is far more a help to the news industry than a threat. In the future, using AI to assist reporting will become a major trend in competition among news organizations. But as AI is applied, humans should also clarify ethical accountability for algorithms as soon as possible, to head off future problems.
Note: The original report comes from the Columbia University School of Journalism and was co-authored by Mark Hansen, Meritxell Roca-Sales, Jon Keegan and George King. Leifeng.com focused on compiling and interpreting the entire report.