Artificial Intelligence: A Legal and Societal Issue for Morocco and Europe

Artificial intelligence (AI), as illustrated by applications such as Midjourney, DALL-E and the famous ChatGPT, represents a major turning point in global technological evolution. While its applications promise to revolutionize various economic, social and legal sectors, they also raise profound questions about the regulation of this technology and the fundamental rights it could affect. In particular, the issues of intellectual property, access to justice and the regulation of generative artificial intelligence are at the heart of legislators’ concerns, both in Morocco and in Europe.

Artificial intelligence (AI) is unquestionably at the heart of contemporary legal concerns, particularly in terms of regulation. For several years now, it has been attracting not only scientific and economic interest, but also increased attention from legislators worldwide. The European Union was one of the first to take up this issue, with draft regulations aimed at framing the use of AI in areas as varied as data protection, civil liability and security. After several years of discussion, AI law now looks set to become a reality.

The European Union pioneers the Artificial Intelligence Regulation

On June 13, 2024, the European Union adopted its Regulation on Artificial Intelligence, a major legislative breakthrough that lays the foundations for strict supervision of emerging technologies. The particularity of this text lies in its requirement of human oversight for AI systems, particularly those classified as “high-risk”: humans must be involved in the monitoring and operation of these technologies to guarantee their compliance with safety, ethical and fundamental-rights standards.

The meteoric rise of generative artificial intelligence, illustrated by applications such as ChatGPT, has highlighted the importance of regulating these fast-growing technologies. This phenomenon, which has revolutionized the way individuals and companies interact with machines, has also raised concerns about its ethical, social and legal implications. In light of this, the European Union has taken legislative action to frame the use of these systems through the adoption of the AI Regulation (RIA, after its French acronym), which entered into force on August 1, 2024, with most of its provisions applying from August 2, 2026. This regulation addresses in detail the issue of general-purpose AI (GPAI) systems and GPAI models, introducing transparency obligations and specific rules for providers of these technologies.

Generative AI: a global phenomenon

At the end of 2022, the release of ChatGPT by OpenAI marked a turning point in the use of generative AI. This language model, based on generative pre-trained transformer (GPT) technology, generates text from simple instructions, while related generative systems produce images, video and audio. These systems are built on so-called “general-purpose” AI models, capable of performing a multitude of tasks regardless of the application domain. They are having a major impact, particularly in the fields of content creation, customer service, education and research.

However, the rise of these technologies has also highlighted risks, particularly in terms of misinformation, identity fraud and copyright infringement. It became obvious that clear regulation was needed to prevent the misuse of these technologies and ensure their compliance with ethical and legal standards.

The AI Regulation (RIA): an ambitious legislative framework

Against this backdrop, the RIA has been drawn up by the European Union to provide a framework for AI systems, with a particular focus on general-purpose AI systems (GPAI) and GPAI models. The main aim of the regulation is to ensure that these technologies are transparent, safe and respectful of fundamental rights.

Transparency obligations for providers of GPAI systems

Article 50 of the RIA imposes specific transparency obligations on providers of GPAI systems. This includes the need to clearly mark AI-generated or manipulated content, such as images, video or text, as artificial. These systems, such as Midjourney or DALL-E, will need to incorporate robust and reliable marking techniques, such as the use of digital watermarks for visual content.
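By way of illustration only (the Regulation does not prescribe any particular technique, and production-grade marking relies on robust standards such as invisible watermarks or C2PA provenance manifests), a provider could embed a machine-readable label in an image’s metadata. The sketch below uses Pillow and hypothetical field names:

```python
# A minimal sketch of machine-readable marking: embed an "AI-generated" label
# in a PNG's metadata with Pillow. Illustrative only; Article 50 compliance in
# practice calls for robust, tamper-resistant techniques.
from PIL import Image
from PIL.PngImagePlugin import PngInfo

def save_with_ai_label(image: Image.Image, path: str, generator: str) -> None:
    """Write the image with provenance fields (field names are hypothetical)."""
    metadata = PngInfo()
    metadata.add_text("ai_generated", "true")
    metadata.add_text("generator", generator)
    image.save(path, pnginfo=metadata)

# Stand-in for a generated picture; a real provider would label model output.
picture = Image.new("RGB", (64, 64), color="gray")
save_with_ai_label(picture, "labeled_output.png", "example-image-model")
print(Image.open("labeled_output.png").text)  # {'ai_generated': 'true', ...}
```

Plain metadata of this kind is easily stripped, which is precisely why the text insists on marking techniques that are robust and reliable.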

However, areas of uncertainty remain. For example, Article 50 refers to chatbots, such as ChatGPT, which must inform users when they are interacting with a machine. But this rule raises practical and legal questions, particularly as regards its specific application to GPAI systems. In addition, labeling creative or artistic content as artificial could pose dilemmas, as it is difficult to determine when content is clearly creative and therefore exempt from the strict requirements of the regulation.
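As for the chatbot disclosure duty, a minimal sketch (hypothetical names throughout; the Regulation prescribes no particular wording or mechanism) could surface a notice before the first machine-generated reply:

```python
# Minimal sketch of a chatbot disclosure duty: tell the user they are talking
# to a machine before the first reply. All names are hypothetical.
DISCLOSURE = "Notice: you are chatting with an AI system, not a human."

class DisclosingChatbot:
    def __init__(self, generate_reply):
        self.generate_reply = generate_reply  # any text-generation callable
        self.disclosed = False

    def reply(self, user_message: str) -> str:
        answer = self.generate_reply(user_message)
        if not self.disclosed:
            self.disclosed = True
            return f"{DISCLOSURE}\n\n{answer}"  # disclose on first interaction
        return answer

bot = DisclosingChatbot(lambda msg: f"Echo: {msg}")  # stand-in for a model
print(bot.reply("Hello"))
```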

Special rules for GPAI models

The regulation also distinguishes “GPAI models”, i.e. AI models that can be integrated into a wide variety of applications and services. These models are defined as general technologies capable of performing a wide range of tasks. Providers of these models must comply with specific rules, in particular with regard to technical documentation and transparency about the data used to train them.

Obligations include providing information on training content, including on copyright issues. Providers must ensure that the content used to train their models does not infringe intellectual property rights, unless specific exceptions are provided for by legislation. In addition, specific rules apply to models deemed to present “systemic risk” due to their advanced technical capabilities, such as OpenAI’s GPT-4. These models are subject to additional monitoring, risk-assessment and cybersecurity requirements.
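The exact documentation format is left to be specified by template, but in spirit the obligation amounts to keeping structured records about the model and its training content. A hypothetical minimal record, with purely illustrative field names, might look like this:

```python
# Hypothetical minimal record of GPAI model documentation. Field names are
# illustrative; the Regulation defers the actual format to official templates.
training_documentation = {
    "model_name": "example-gpai-model",
    "architecture": "decoder-only transformer",
    "training_data_summary": {
        "sources": ["licensed corpora", "public web crawl"],
        "copyright_policy": "TDM opt-outs honored; licensed content logged",
    },
    "systemic_risk_assessed": True,
}
print(training_documentation["training_data_summary"]["copyright_policy"])
```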

Severe penalties for non-compliance

The RIA provides for strict penalties for violations of its provisions. Providers of GPAI models can face administrative fines of up to 15 million euros or 3% of their total worldwide annual turnover, whichever is higher. Fines may also be imposed if a provider fails to comply with transparency obligations, or refuses the competent authorities access to its model for evaluation.
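In concrete terms, the ceiling is simply the higher of the two amounts. A trivial sketch of the arithmetic, assuming the 15-million-euro / 3% formula stated above:

```python
# Sketch of the fine ceiling for GPAI providers: EUR 15 million or 3% of
# total worldwide annual turnover, whichever is higher.
def gpai_fine_ceiling(worldwide_turnover_eur: float) -> float:
    return max(15_000_000, 0.03 * worldwide_turnover_eur)

# A provider with EUR 2 billion in turnover faces a ceiling of EUR 60 million,
# not the flat EUR 15 million.
print(gpai_fine_ceiling(2_000_000_000))  # 60000000.0
```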

The question of systemic risks and personal data

One of the major concerns of the regulation is the systemic risks that certain AI models could generate. If a model has extremely advanced capabilities that could have a significant impact on users or society, it may be included on a public list of models at systemic risk. This assessment takes into account several criteria, such as the size of the model, the volume of computation used to train it, and the number of users it reaches.
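Training compute is the criterion the Regulation quantifies: it presumes systemic risk above a threshold of 10^25 floating-point operations. As a rough illustration, compute can be estimated with the common 6 × parameters × training-tokens approximation from the scaling-law literature (an assumption of this sketch, not a rule taken from the Regulation):

```python
# Back-of-the-envelope check against the 1e25 FLOP presumption threshold for
# systemic-risk GPAI models. The 6 * N * D estimate comes from the scaling-law
# literature, not from the Regulation itself.
SYSTEMIC_RISK_FLOPS = 1e25

def estimated_training_flops(n_parameters: float, n_training_tokens: float) -> float:
    return 6 * n_parameters * n_training_tokens

# Hypothetical model: 1e12 parameters trained on 10 trillion tokens.
flops = estimated_training_flops(1e12, 1e13)
print(f"{flops:.1e} FLOPs -> systemic risk presumed: {flops > SYSTEMIC_RISK_FLOPS}")
```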

Another issue that remains under-explored in the regulation is personal data protection. Given that GPAI models are often trained on vast quantities of data harvested from the Internet (notably by web crawling), the use of sensitive, personal or unauthorized data remains a point of vigilance.
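One safeguard already available on the crawling side is honoring publishers’ opt-out signals. Below is a minimal sketch using Python’s standard robots.txt parser; it addresses only one such signal and does not, by itself, resolve the personal-data questions the regulation leaves open:

```python
# Minimal sketch: honor a site's robots.txt before harvesting training data.
from urllib.parse import urlsplit
from urllib.robotparser import RobotFileParser

def may_crawl(url: str, user_agent: str = "example-training-crawler") -> bool:
    parts = urlsplit(url)
    parser = RobotFileParser()
    parser.set_url(f"{parts.scheme}://{parts.netloc}/robots.txt")
    parser.read()  # fetch and parse the site's robots.txt
    return parser.can_fetch(user_agent, url)

if may_crawl("https://example.com/articles/some-page"):
    print("Crawling allowed for this URL.")
else:
    print("Crawling disallowed; skip this URL.")
```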

Why is Human Control Necessary?

One of the main reasons why AI regulations require human control is the autonomy of these systems. AI, by definition, operates without immediate human intervention, which can lead to serious malfunctions if oversight mechanisms are not in place. The lack of clarity about the inner workings of AI systems also creates a climate of mistrust. This is why legislation imposes some form of human guarantee to ensure a minimum of transparency and accountability.

The aim is to prevent the “dehumanization” of human activities, by maintaining human control that can be more or less strong depending on the risks associated with AI applications. To achieve this, current legislation focuses on three types of “human guarantor”: the person in charge, the interlocutor and the observer.

  1. The person in charge: The complexity and autonomy of artificial intelligence mean that there is a pressing need for a clearly identified person in charge, capable of assuming the risks associated with its use. There are two levels of responsibility:
  • Ex ante liability: This requires AI designers and operators to take all necessary precautions upstream to avoid risks. This includes measures such as dataset supervision, regulation by design, and the adoption of strict ethical standards when designing systems.
  • Ex post liability: In the event of damage, liability must be clearly assigned. Victims need to know who to turn to in the event of malfunction or harm caused by an AI system. The European regulation of June 2024 tackles this issue by detailing the liability of AI designers, providers, and deployers.

The key here lies in identifying the responsible players. The legislator must determine who, among all those involved in the chain of creation and use of AI, is responsible for the risks and damage associated with its use. In Europe, this responsibility is divided between algorithm designers and end-users, providing a clearer division of obligations.

  2. The interlocutor: The autonomy of AI leads to a dematerialization of relationships, which can leave users unable to interact directly with a human interlocutor. This poses risks of discrimination or bias, and undermines the transparency of the decisions these systems make. The solution envisaged by European and international regulations is to guarantee the presence of a human interlocutor in certain situations:

  1. Minimal intervention: In certain cases, a human interlocutor must be available to monitor the AI’s operation and intervene in the event of malfunction, but without systematically interfering in the AI’s decision-making process.
  2. Maximum intervention: In situations where the AI’s decision may have a direct impact on the individual’s life (for example, a medical diagnosis or a judicial decision), a human interlocutor could be responsible for making the final decision, thus excluding the AI from this process.

The choice of interlocutor is a key issue. The European regulation requires deployers to designate a natural person with the necessary skills and authority to intervene in the AI system’s operation. Furthermore, the possibility for users to refuse a fully automated decision (via an “opt-out” system, for example) is one of the most innovative regulatory proposals; a sketch of such a mechanism follows this list.

  3. The observer: The role of the observer is to ensure that human control functions effectively and in accordance with legal requirements. This external guarantor must check that the persons in charge and the interlocutors are fulfilling their obligations, and intervene when irregularities are detected. In this sense, the observer can be an independent authority with the power to investigate, verify compliance and sanction unlawful behavior.

The EU has established a European Artificial Intelligence Office to oversee implementation and coordinate action with national regulators. In Morocco, institutions such as the Commission Nationale de contrôle de la protection des Données à caractère Personnel (CNDP) could see their role strengthened, notably to ensure that AI applications comply with fundamental rights and data protection.
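Returning to the interlocutor’s opt-out mentioned above, the mechanism can be pictured as a simple routing rule: opted-out cases go to a human reviewer rather than to the model. The sketch below is purely illustrative; the regulation requires the possibility of human intervention without prescribing any API:

```python
# Sketch of an "opt-out" from fully automated decisions: a user who refuses
# machine decision-making is routed to a human reviewer. Names are hypothetical.
from dataclasses import dataclass

@dataclass
class Decision:
    outcome: str
    decided_by: str  # "ai" or "human"

def decide(case_id: str, opted_out: bool, ai_decide, human_queue: list) -> Decision:
    if opted_out:
        human_queue.append(case_id)  # defer the case to a human interlocutor
        return Decision("pending_human_review", "human")
    return Decision(ai_decide(case_id), "ai")

queue: list = []
print(decide("case-42", opted_out=True, ai_decide=lambda c: "approved", human_queue=queue))
print(queue)  # ['case-42']
```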

AI Regulation in Morocco: Between Opportunities and Challenges

In Morocco, the issue of AI regulation is rapidly emerging. The country has already laid solid foundations for framing the digital transformation through initiatives such as the Maroc Numérique 2030 strategy, as well as legislative proposals. The latest of these, a 17-article text, was presented at a public hearing by members of the opposition Haraki party, highlighting the major stakes of this technology and the associated risks.

The draft law in preparation aims to guarantee the ethical use of AI while protecting citizens against its risks. Largely inspired by European regulations, it also incorporates features adapted to Moroccan realities. Morocco is thus putting in place a legislative framework intended to guarantee not only the security of personal data, but also respect for ethics in the use of emerging technologies.

So, although AI regulation in Morocco is still in the construction phase, the country seems resolutely committed to a proactive approach aimed at preventing abuses and maximizing the benefits of this digital revolution.

Artificial intelligence in the field of intellectual property: Innovation or confusion of rights?

The development of artificial intelligence has profound implications for intellectual property, particularly with regard to the creation of machine-generated works. Traditionally, works protected by copyright must be the fruit of the creative activity of a human author. However, with the emergence of generative AIs capable of producing artistic, musical or literary works, a new question is emerging: who owns the rights to these works? The human who designed and instructed the tool, or the machine that executed the instructions?

To date, international legislation, notably the Berne Convention for the Protection of Literary and Artistic Works, does not take machine-generated creations into account. In the United States, for example, the Copyright Office has already ruled that only human creations can be protected by copyright, leaving autonomous AI creations aside. In Europe, the question remains unresolved, although some legal experts advocate extending copyright or creating a new type of protection for AI-generated works.

When drafting their own digital legislation, Moroccan legislators will have to face this central question: how can AI-generated creations be integrated into a legal system that, historically, has been based on the idea of the human author?

Artificial intelligence represents a technological revolution that needs to be rigorously managed. The issues at stake are manifold and require appropriate legislative responses, both in Morocco and in Europe.

Moroccan and European legislators will need to combine innovation and caution, drawing up regulations that promote the exploitation of AI’s potential while preserving citizens’ fundamental rights, particularly in terms of confidentiality, transparency and fairness. The future of AI in our societies will largely depend on the ability of legislators to anticipate possible abuses while maximizing the benefits of these technologies for all.

Sources:

  • European Commission, “AI Act: Artificial Intelligence Regulation,” 2021.
  • UNESCO, “Recommendations on the Ethics of Artificial Intelligence,” 2022.
  • CNDP, “Rapport annuel sur la Protection des Données Personnelles,” 2024.
  • Conseil Économique, Social et Environnemental, “Avis sur l’Intelligence Artificielle et ses Défis,” 2025.
  • Revue Dalloz IP/IT, 2024.

Westfield Morocco is a Rabat-based legal and tax advisory firm specializing in assisting Moroccan companies with their internal growth operations (creation of subsidiaries, joint ventures) or external growth operations (acquisitions, mergers), their day-to-day contract and corporate law matters, their compliance projects (CNDP, GDPR) and their international litigation or arbitration.


Wassim Benzarti is a member of the Paris Bar and heads Westfield Morocco, a company specializing in legal and tax advice.

“I mainly work with companies to help them turn a corner: either by restructuring to be more efficient, or by going international by setting up operations abroad, or by opening up their capital to new investors or in external growth operations via acquisitions or mergers”.


Aya is a trilingual business lawyer with a Master’s degree in Business Law from Mundiapolis University and Université Côte d’Azur.

With an excellent academic background and a marked penchant for research, Aya has acquired significant experience in the fields of mergers & acquisitions, personal data protection, corporate law, industrial property, distribution law and competition law.
