Are we ready for an AI-generated newscast?

GMA News made an expensive gamble. It introduced AI-generated sportscasters on its social media platform.

Netizens grew curious about the technological experiment and watched the male and female avatars read sports news.

It was a breakthrough, but it drew a backlash. There was some praise, yet the avatars were widely bashed, suggesting that Filipinos were not ready to see robots reading sports stories on television or on social media platforms.

What motivated GMA News to experiment with AI-generated sportscasters remains unclear.

Was it a cost-cutting measure? Did technology drive it? Was it really a trail-blazing innovation in television journalism?

Testing it on its social media platforms could be an admission that the Philippines was not ready for AI-generated newscasts on free television.

In other Southeast Asian countries, like Malaysia, some networks have been using AI-generated news readers, and local audiences have embraced them. Not yet in the Philippines.

However, journalism cannot escape artificial intelligence. In fact, many foreign and local newsrooms have been using artificial intelligence to speed up workflow and enhance news presentation.

For instance, Bloomberg and Reuters News have started using artificial intelligence in reporting economic indicators like GDP, inflation, and market data.

The automation was much faster, less prone to error, and cut costs, which led to the letting go of some journalists in many smaller news bureaus worldwide.

There have been three waves of AI innovation in the past decade: automation, augmentation, and generation.

In the very early phase, the focus was on automating data-driven news stories, such as financial reports, sports results, and economic indicators, using natural language generation techniques.
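To make the idea concrete, here is a minimal, hypothetical sketch of what template-based news automation looks like in Python. The function, figures, and wording are invented for illustration and do not represent any news agency's actual system.

```python
# A hypothetical sketch of first-wave, template-based news automation:
# structured economic data goes in, a short readable news line comes out.
def inflation_report(country, month, rate, previous_rate):
    """Turn inflation figures into a one-line news item."""
    if rate > previous_rate:
        verb = "accelerated to"
    elif rate < previous_rate:
        verb = "eased to"
    else:
        verb = "held steady at"
    return (
        f"Inflation in {country} {verb} {rate:.1f} percent in {month}, "
        f"compared with {previous_rate:.1f} percent the month before."
    )

# Example with made-up numbers:
print(inflation_report("the Philippines", "March", 3.7, 3.4))
# Inflation in the Philippines accelerated to 3.7 percent in March,
# compared with 3.4 percent the month before.
```

Real systems are far more elaborate, but the principle is the same: the speed comes from filling well-tested templates with verified data, not from the machine deciding what to say.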

For instance, Bloomberg News and Reuters are well ahead in automating finance-driven reports on economic indicators such as GDP, inflation, debt, and export and import data. AFP and AP are also doing it.

The second wave arrived when “the emphasis shifted to augmenting reporting through machine learning and natural language processing to analyze large datasets and uncover trends.”

An excellent example of this is an Argentine newspaper, La Nacion, which started using AI to support its data team in 2019 and then set up an AI lab in collaboration with data analysts and developers.

The third and current wave is generative AI, which took off after ChatGPT was introduced in November 2022.

It’s powered by large language models capable of generating narrative text at scale.

This new development offers journalism applications beyond simple automated reports and data analysis.

Now, a chatbot can be asked to write a longer, balanced article on a subject or an opinion piece from a particular standpoint. It can also do so in the style of a well-known writer or news publication, like the New York Times or the Guardian in the United Kingdom.

Ideas for possible uses for this technology have multiplied, with journalists often testing chatbots’ capabilities to write and edit.

Part of the reason ChatGPT and other tools have generated so much excitement may be that they are so consumer-friendly and can communicate in natural language.

The language models behind these tools respond to prompts when generating new content rather than developing ideas on their own.

The model is trained on a set of content and data and generates new output based on what it was trained on.

This means that while it can be helpful in synthesizing information, making edits, and assisting with reporting, generative AI still lacks essential skills, and that will keep it from taking on a more significant role in journalism.

Based on where it is today, it’s not original. It’s not breaking anything new. It’s based on existing information. And it doesn’t have the analytic capability of a trained journalist.

Because of this, generative AI can’t meet the demand for more analysis or a more developed take on a subject, something readers look for when they go to outlets like the Financial Times, Reuters, and Bloomberg.

Nothing can ever replace a journalist. The language models are not creative or original, and they do not generate anything genuinely new; AI merely mimics well.

The models often struggle to generate accurate and factual information about current events or real-time data.

This suggests that AI tools still require human auditing for breaking news reporting, a complex and expensive operation that demands careful fact-checking and cross-referencing of information.

Generative AI models have also struggled with numbers; they are often inaccurate when it comes to exact calculations.

However, this doesn’t mean that generative AI has no role in journalism.

There should be caution about using new tools without human supervision.

AI is not about the total automation of content production from start to finish; it is still about augmentation to give professionals and creatives the tools to work faster, freeing them up to spend more time on what humans do best.

Editors who already use AI say it is useful for transcription, summarizing speeches and past statements, and looking up contradictory positions on issues.

Human journalism is also full of flaws, and newsrooms mitigate those risks through editing; the same safeguard can apply to AI-assisted work.

The media should be working with the technology in a way that acknowledges and counters its current pitfalls.

For instance, Newsroom, an app developed in the US, publishes articles written by AI and manually reviewed by humans. Humans will always be part of the oversight process.

The app covers global topics such as geopolitics and climate, but it started with lower-risk topics, like sports.

Currently, the app only uses English-language source pieces and publishes summaries in English.

What is the future?

The explosion of data from sources such as the web, sensors, mobile devices, and satellites has created a world with too much information.

More information is being produced now than at any other point in history, making it much more challenging to filter out what is unwanted.

And this is one area of journalism where AI can play a crucial role in lessening the human workload.

AI should not only be seen as a tool to generate more content but also to help filter it.

Some experts predict that by 2026, 90% of online content could be machine-generated.

This marks an inflection point, where the focus is on building machines that filter out noise, distinguish fact from fiction, and highlight what is significant.

Journalists should play a role in the development of new AI tools.

They can do this, for example, by writing editorial algorithms and applying journalistic principles to the new technology.
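As a rough, hypothetical illustration of what an "editorial algorithm" could look like, here is a short Python sketch that ranks stories by sourcing and corroboration. The fields, weights, and sample headlines are all invented for this sketch; real newsroom systems would be far more nuanced.

```python
# A hypothetical sketch of journalistic principles expressed as a ranking rule.
from dataclasses import dataclass

@dataclass
class Story:
    headline: str
    named_sources: int          # on-the-record sources cited
    corroborating_outlets: int  # other outlets reporting the same facts
    is_opinion: bool

def editorial_score(story: Story) -> float:
    """Favor well-sourced, corroborated reporting; demote unsourced claims."""
    score = 1.0 * story.named_sources + 0.5 * story.corroborating_outlets
    if story.named_sources == 0:
        score -= 2.0   # unsourced claims are pushed down, not promoted
    if story.is_opinion:
        score *= 0.5   # opinion is weighted differently from straight news
    return score

stories = [
    Story("Peso hits new low vs dollar", 3, 4, False),
    Story("Viral post claims fuel price rollback", 0, 0, False),
]
for s in sorted(stories, key=editorial_score, reverse=True):
    print(f"{editorial_score(s):5.1f}  {s.headline}")
```

The point is not the specific weights but who writes them: if journalists help design such filters, the values of the newsroom, and not just the engagement metrics of a platform, shape what the machine surfaces.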

The news industry must be actively engaged in the AI revolution.

International media companies and local news agencies have an opportunity to become major players in the space – they possess the most valuable assets for AI development: text data for training models and ethical principles for creating reliable and trustworthy systems.
