Artificial Intelligence Under Control

In 2023, the world was rocked by OpenAI’s generative artificial intelligence, ChatGPT, which can create texts, images, code and more. Experts call it a breakthrough but warn that further development of the technology could drastically reshape the labor market and even slip out of human control.

EU and Google to develop AI pact

Last week it became known that Google and the EU plan to jointly develop a new pact on AI. Earlier, Margrethe Vestager, the European Commission Vice-President in charge of the digital agenda, said the EU is drafting rules to regulate generative artificial intelligence. For example, there are plans to introduce mandatory labeling for photos, videos and songs created by AI, as well as measures to protect people from discrimination.

The European Commission is seeking to develop an agreement on artificial intelligence (AI) involving European and non-European companies before formal rules to regulate the technology take effect, said Thierry Breton, the European Commissioner for digital technology.

The U.K. has a similar approach. According to Bloomberg, the U.K. government has invited the heads of some of the world’s largest artificial intelligence firms to discuss AI controls.

“We’re thinking about how we can use our position as a world leader in AI development to convene the international community to make sure we have some standards and ground rules,” said British technology minister Paul Scully.

G7 discusses international regulation of AI

Microsoft President Brad Smith said he is most concerned about AI-generated deepfakes – realistic-looking but fabricated videos and images. He said people need to know when photos or videos are real and when they are AI-generated.

For weeks, lawmakers in Washington have been debating laws to control AI, while companies race to bring increasingly versatile neural networks to market. Smith also called for “safety brakes” on AI systems that control power grids, water supplies and other critical infrastructure, so that humans do not lose control over them.

Officials from the countries of the Group of Seven (G7) will meet on Tuesday, May 30, to discuss the problems associated with generative AI tools such as ChatGPT.

According to Reuters, the leaders of the G7, which includes the U.S., EU and Japan, agreed last week to create an intergovernmental forum called the Hiroshima Artificial Intelligence Process to discuss these issues. Tuesday’s gathering will be the forum’s first working meeting, addressing topics such as intellectual property protection, disinformation and governance of the technology.

At last week’s G7 summit in Hiroshima, leaders also called for the development and adoption of international technical standards to ensure that AI remains “trustworthy” and “consistent with shared democratic values.”

The main concern is that technologies able to produce authoritative, human-sounding text and generate images and videos could, if allowed to develop unchecked, become powerful tools for disinformation and political subversion.

The World Health Organization has said that introducing AI too quickly carries the risk of medical errors, which could undermine confidence in the technology and delay its development.

G7 leaders acknowledged that legislation has not kept pace with the rapid development of AI. They noted that while new technologies offer opportunities for growth and innovation across industries, their risks must be weighed alongside the benefits. To that end, the heads of state identified five principles for the responsible use of AI. In their view, new technologies must be governed “in accordance with shared democratic values, including accountability, safety, protection against online harassment, respect for privacy, and protection of personal data.”
