The Ministry of Digital Development has been asked to introduce labeling in Russia for all AI-generated texts and images

The Russian university RTU MIREA has proposed to the Ministry of Digital Development a number of initiatives aimed at ensuring the safe use of artificial intelligence, among them the introduction of mandatory labeling of content created with the help of AI. How this would be implemented technologically is unknown. Experts believe that Russian AI services would be obliged to automatically mark any texts generated within the service. In their opinion, the initiative could ultimately lead to the official blocking of foreign AI services, since content they create is difficult to recognize.

AI Marking

Russian Technological University (RTU MIREA) has asked the Ministry of Digital Development to introduce mandatory labeling of content created using artificial intelligence (AI) and to prepare a program to protect “critical infrastructure” (apparently referring to critical information infrastructure, or CII) from possible cyberattacks using neural networks. TASS reported this, citing a letter from RTU MIREA rector Stanislav Kudzh to the head of the Ministry of Digital Development, Maksut Shadaev.

“RTU MIREA proposes… to make mandatory the marking of content created with neural network technology with a special graphic sign within the framework of the federal project ‘Artificial Intelligence’ of the national program ‘Digital Economy,’ to provide for the development of a corresponding resolution of the Russian government, … and to prepare and approve a program for the protection of critical infrastructure from possible cyberattacks using neural networks,” says the letter cited by TASS.
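The letter does not specify what such a “special graphic sign” would look like in practice. As a purely illustrative sketch, assuming marking means a visible badge stamped onto generated images (the badge text, placement and file names here are hypothetical, not anything the proposal defines), it could be as simple as:

```python
# Illustrative only: stamp a visible "AI-generated" badge onto an image.
# Assumes Pillow (pip install Pillow); badge text and layout are hypothetical.
from PIL import Image, ImageDraw

def stamp_ai_badge(src_path: str, dst_path: str, label: str = "AI-GENERATED") -> None:
    img = Image.open(src_path).convert("RGBA")
    overlay = Image.new("RGBA", img.size, (0, 0, 0, 0))
    draw = ImageDraw.Draw(overlay)
    # Measure the label with the default bitmap font.
    left, top, right, bottom = draw.textbbox((0, 0), label)
    w, h = right - left, bottom - top
    pad = 6
    x, y = img.width - w - 2 * pad, img.height - h - 2 * pad
    # Semi-transparent dark box in the bottom-right corner, then the label.
    draw.rectangle((x, y, img.width, img.height), fill=(0, 0, 0, 160))
    draw.text((x + pad, y + pad), label, fill=(255, 255, 255, 255))
    Image.alpha_composite(img, overlay).convert("RGB").save(dst_path)

stamp_ai_badge("generated.png", "generated_marked.jpg")
```

A visible badge of this kind is trivial to crop out, which is one reason the experts quoted below doubt that marking alone achieves much.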

RTU MIREA also suggested supplementing the Ministry of Digital Development’s order No. 734 of December 21, 2020, “On determining threats to personal data security,” with a separate paragraph on the threat of unauthorized access to people’s personal data in information systems by persons without the appropriate authority, and the subsequent use of this data with the help of AI.

The Ministry of Digital Development has been asked to label all AI content

“It is clear that without the introduction of additional controls on the development of artificial intelligence, within the next few years neural networks will be able to hack the security of computer programs, including in areas critical to the economy, far more effectively than humans,” the letter states.

In addition, RTU MIREA asks to add a module on AI called “Combating misuse of neural networks” to the Ministry of Digital Development’s special project “CyberLife” within the federal project “Information Security”. According to the letter, a survey of experts conducted by the university found that the ranking of neural network dangers is headed by a possible increase in cybercrime. The university also noted that massive use of AI, even for entertainment purposes, could jeopardize the safety of users’ personal data. Moreover, the development of neural networks is already capable of causing a shortage of jobs in a number of specialties.

Regulation is good in moderation

Alexei Sergeev, head of the machine learning and artificial intelligence practice at Axenix (formerly Accenture), told CNews that moderate regulation is a good thing. “Someone has to provide safeguards for the public and set the rules of the game. A similar issue arose in the context of the regulation of recommendation systems; it has now receded into the background, given the technological changes over the past year. Here, as in that situation, the position of business is simple: do not go overboard with restrictive measures to the extent that it hurts the dynamics of technology development in Russia. We are in a catching-up position, given that both ChatGPT and Midjourney are products of Western companies and investments,” the expert said.

According to him, the task now is to find the right balance and targeting of control measures; otherwise they will create an incentive for technology entrepreneurs to act illegally or launch their products in other jurisdictions, and will contribute to a drain of valuable competencies from Russia.

Asya Vlasova, managing partner of iTrend, a communications agency for IT companies, told CNews that labeling AI content is a logical step for the regulator. “First of all, AI services carry the danger of producing huge arrays not only of unverified texts but of completely fictitious facts, figures and data. Given that artificial intelligence is constantly learning, over time it will start referring to itself, and the number of fakes will grow exponentially,” the expert said.

In her opinion, it is also dangerous to collect people’s personal data (photographs, personal posts, geotags from social networks), to train AI on examples of personal correspondence, and to use this information for social engineering, for hacking systems, or even for simple blackmail. AI content, both textual and graphical, opens up a whole new space for information attacks and fake news. However logical it may be to introduce tools to control AI content, the effort will inevitably face two obstacles. The first is that it is extremely difficult to identify AI content: it is still unclear what kind of philological expertise would make it possible to determine the degree of machine and human involvement in the same text, since humans constantly retrain the algorithm on new data sets, can take AI content as a basis and refine it, and so on, says Vlasova.
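To make the difficulty concrete: naive statistical detectors tend to key on surface regularities, such as unusually uniform sentence lengths (“low burstiness”). A minimal sketch of such a heuristic follows; the threshold, and the very assumption that this statistic separates machine from human text, are illustrative only, and light human editing of AI output is enough to defeat it.

```python
# Illustrative only: a naive "burstiness" heuristic sometimes cited as a
# weak signal of machine-generated text. The threshold is an assumption,
# not a validated value; this is NOT a reliable detector.
import re
import statistics

def sentence_length_burstiness(text: str) -> float:
    """Coefficient of variation of sentence lengths, in words."""
    sentences = [s for s in re.split(r"[.!?]+\s*", text) if s.strip()]
    lengths = [len(s.split()) for s in sentences]
    if len(lengths) < 2:
        return 0.0
    return statistics.stdev(lengths) / statistics.mean(lengths)

def looks_machine_generated(text: str, threshold: float = 0.35) -> bool:
    # Very uniform sentence lengths -> low burstiness -> "suspicious".
    # A human editor who merges or splits a few sentences flips the verdict.
    return sentence_length_burstiness(text) < threshold

print(looks_machine_generated("Short one. Then a much longer sentence follows here. Tiny."))
```

More serious detectors rely on language-model perplexity or watermark statistics rather than heuristics like this, but they face the same fundamental problem: the signal degrades as soon as a human revises the text.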

“It is likely that Russian AI services will be required to automatically mark any texts generated within them. Texts generated by foreign services are unlikely to be recognized. Such services may be officially blocked in the future, but if necessary, attackers can always bypass blocking,” she said.
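What “automatic marking” might mean on the service side is not spelled out anywhere in the proposal. One minimal sketch, assuming a hypothetical generation function and a machine-readable disclosure format of our own invention (no real service or regulation defines either), is for the service to attach the label before the text ever leaves its API:

```python
# Illustrative only: a service-side wrapper that attaches an AI-content
# disclosure to every generated text. generate_text() and the label format
# are hypothetical stand-ins, not a real API.
import json
from datetime import datetime, timezone

def generate_text(prompt: str) -> str:
    # Stand-in for a real model call.
    return f"Generated answer for: {prompt}"

def generate_with_disclosure(prompt: str, service: str = "example-ai") -> dict:
    text = generate_text(prompt)
    return {
        "text": text,
        # Human-readable marker, prepended so it survives copy-paste.
        "display_text": "[AI-generated] " + text,
        # Machine-readable provenance record for downstream platforms.
        "provenance": {
            "generator": service,
            "ai_generated": True,
            "created_at": datetime.now(timezone.utc).isoformat(),
        },
    }

print(json.dumps(generate_with_disclosure("What is labeling?"), indent=2))
```

The obvious weakness, which the experts below point out, is that the marker lives in the output itself: anyone who strips the prefix, or copies only the raw text field, is left with unmarked AI content.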

The second problem is the reluctance of users themselves to advertise that certain content was created by a neural network, Vlasova said. “While AI authorship is not a problem when creating product descriptions, it is not always beneficial for a copywriter or designer to acknowledge the use of AI in their work,” she said. According to ANO Digital Economy, investment in AI projects in Russia increased by 170% in 2021, from both private investors and the government, she added.

“Therefore, attempts to regulate this sphere are understandable and probably timely and justified. But so far we have seen only one attempt at labeling in the creative industries: in September 2022, new rules for labeling advertising on the Internet came into effect. The system is still being worked out. For example, there is still debate as to which creatives should be classified as advertising, and whether every press release or news item should be labeled. According to a clarification from the Federal Antimonopoly Service (FAS) dated November 21, 2022, any material can be considered advertising if it focuses on one product and contains calls to buy it. To avoid fines, companies label everything. In fact, the Russian advertising and marketing associations, together with an association of specialized lawyers, recently asked legislators to soften the liability for violating the rules for labeling Internet advertising and to postpone the forthcoming introduction of sanctions until at least 2024,” Vlasova said.

AI brings risks

The founder and product owner of Comply, Sergey Saiganov, told CNews that neural networks are a powerful tool in the hands of cybercriminals because of their ability to instantly analyze and generate large amounts of information. It is therefore absolutely necessary to update lists of security threats with possible attack vectors using neural networks, as well as to train future specialists, the expert said.

“However, the purpose and mechanisms of labeling materials generated by neural networks are not quite clear. In particular, who will bear such responsibility: the author of the content, which is practically impossible to control, or the digital platform where the neural network content is placed, which in turn would be an additional burden on the already huge set of responsibilities of site owners? The purpose of such marking is also doubtful: criminals practicing deepfake fraud obviously will not voluntarily apply a marker to a fake,” the expert said.

Alexander Partin, co-chairman of the RAEC Privacy & Legal Innovation cluster, told CNews that if a content-marking requirement is introduced, it makes sense to apply it to audio, photo or video materials that use the image of a famous person and claim to be realistic (the technology known as deepfakes).

“Protecting critical infrastructure from cyberattacks using neural networks is certainly a good idea. The catch is that the neural network as a technology appeared quite a long time ago and therefore should already be taken into account when determining threats to data security, including personal data. That is why amending the order of the Ministry of Digital Development and adopting the government resolution make sense only if they entail practical actions to improve the level of protection, rather than remaining a declaration on paper,” Partin said.
