Google is developing an artificial intelligence tool aimed at creating and writing news. As reported by The New York Times, the tool has already been presented to large news organizations such as The Washington Post, News Corp and The New York Times itself.
Details are not yet known, nor is it clear what relationship the tool will have with Google Bard, the company's AI already in operation, or with the other tools on the market that pursue the same objective.
What will Google’s new artificial intelligence be used for to generate news?
Although it has not yet been officially introduced, the working title of the new tool is Genesis, and it would function as an aid to journalists rather than a replacement per se. This so-called journalism bot could generate news stories when provided with the details of an event. In principle, it would act as a kind of personal assistant for journalists, offering different writing styles or alternative headlines. However, given the ongoing speculation about the jobs AI could replace, perhaps this Google development could one day replace journalists as well.
It should be noted that a Google statement provides clarification on this point: the tool is not intended to, and cannot, replace the essential role of journalists in reporting, creating and fact-checking their stories.
At the moment, the tool is still in the development stage, although it is sufficiently advanced to be presented as an option to large media companies.
AI tools for news generation: can they spread misinformation?
One of the main concerns about using AI for news generation is the possibility of spreading misinformation. There have already been cases in which artificial intelligence has produced false or bizarre information, known as hallucinations, most notably with ChatGPT. Chatbots, like other artificial intelligence systems, are not very adept at verifying the information they produce.
Using this type of technology for news production creates a credibility problem, especially if such stories are published by major media outlets.
False information spread by such tools also creates legal problems. In Georgia, for example, a radio host filed a lawsuit against OpenAI, the maker of ChatGPT, after the chatbot provided erroneous information about a court case involving him: it falsely claimed that he was guilty of embezzling funds and defrauding his employer.
There was also the case of a lawyer who filed a court brief citing cases supplied to him by artificial intelligence. The cases turned out to be entirely invented by the AI, with no basis in real litigation.
Finally, it must be remembered that this tool will only be able to create news from information provided to it. It will not be useful for conducting interviews or for doing research from scratch, no matter how advanced and complex the AI becomes. Further development will be needed before Google's artificial intelligence can truly replace the work of journalists.