-
What are your country's legal definitions of “artificial intelligence”?
Regulation (EU) 2024/1689, commonly referred to as the Artificial Intelligence Act, is directly applicable across all Member States of the European Union, including Malta. The Regulation introduces a harmonised definition of an “AI system,” which is described as: “a machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.” (Source: Regulation (EU) 2024/1689 of the European Parliament and of the Council).
-
Has your country developed a national strategy for artificial intelligence? If so, has there been any progress in its implementation? Are there plans for updates or revisions?
Malta has taken proactive steps to align with and even anticipate EU-level regulation.
Founded in 2018, the Malta Digital Innovation Authority (MDIA) is a public body charged with advising and assisting the Maltese government on advancements in cutting-edge technology, such as artificial intelligence. In addition to leading legislative revisions to guarantee compliance with the requirements of Regulation (EU) 2024/1689 (the Artificial Intelligence Act), it is in charge of developing and revising Malta’s national AI policy, known as the Malta AI Strategy and Vision 2030, which outlines Malta’s long-term goals for ethical, transparent and socially responsible AI development.
This Strategy includes 22 action points related to education and the workforce, 6 dedicated to legal and ethical considerations, and 11 concerning ecosystem infrastructure. These objectives are currently being implemented by the MDIA in coordination with other governmental and quasi-governmental bodies. The strategy covers a wide range of initiatives, from equipping students and educators with AI-related competencies to establishing ethical frameworks and regulatory structures that encourage the trustworthy development and deployment of AI systems.
In 2019, the MDIA launched what it described as the world’s first national certification framework for AI systems, intended to promote the development of AI technologies in an ethically aligned, transparent, and socially responsible manner. This initiative, known as the AI Innovative Technology Arrangement certification programme, anticipated core elements of the EU AI Act by introducing a risk-based certification model. Under this scheme, developers and deployers of AI systems could obtain certification through MDIA-licensed technology systems auditors, who would assess whether the AI solution satisfied pre-established criteria and objectives.
Further, in 2022, the MDIA published guidelines for a Technology Assurance Sandbox to facilitate the safe testing and deployment of innovative systems. Malta’s sectoral regulators, including the Malta Financial Services Authority (MFSA) and the Malta Gaming Authority (MGA), have likewise monitored the integration of AI technologies within their respective sectors and have issued relevant guidance.
Malta’s institutional and policy architecture is further supported by collaborative initiatives, including the establishment of the Malta – European Digital Innovation Hub (EDIH). Funded in part by the European Union, this Hub provides support services to SMEs and public sector entities, including AI-focused training, workshops, and industry-academia engagement.
-
Has your country implemented rules or guidelines (including voluntary standards and ethical principles) on artificial intelligence? If so, please provide a brief overview of said rules or guidelines. If no rules on artificial intelligence are in force in your jurisdiction, please (i) provide a short overview of the existing laws that potentially could be applied to artificial intelligence and the use of artificial intelligence, (ii) briefly outline the main difficulties in interpreting such existing laws to suit the peculiarities of artificial intelligence, and (iii) summarize any draft laws, or legislative initiatives, on artificial intelligence.
Malta has taken a proactive and strategic approach toward the regulation and governance of artificial intelligence, including implementation of guidelines, voluntary standards and ethical principles concerning artificial intelligence. Since the establishment of the Malta Digital Innovation Authority (MDIA) through the Malta Digital Innovation Authority Act (Chapter 591 of the Laws of Malta) in 2018, the country has laid the groundwork for a comprehensive AI regulatory framework. The MDIA was set up as a dedicated public authority to oversee and support innovation in emerging technologies, with AI forming a core focus of its current mandate.
Malta was among the first jurisdictions globally to introduce a national AI certification programme, aiming to ensure that AI technologies are developed in an ethically aligned, transparent, and socially responsible manner. This certification framework, embedded within the MDIA’s AI Innovative Technology Arrangement scheme, laid the foundation for a risk-based assessment of AI solutions, mirroring elements now reflected in the EU’s AI Act (Regulation (EU) 2024/1689).
Moreover, the MDIA has actively promoted the use of voluntary regulatory sandboxes. These frameworks enable innovators to test the conformity and resilience of their technologies under real-world conditions in anticipation of the future application of harmonised EU standards. In this context, the MDIA has also issued guidelines for a Technology Assurance Sandbox, further reinforcing Malta’s position as a testing ground for emerging technologies.
As a European Union Member State, Malta has been gearing up for the application of the AI Act, together with the plethora of other EU legislation that bolsters cyber security and resilience (principally DORA, NIS 2 and the Cyber Resilience Act), as well as AI product liability (namely the Product Liability Directive). The MDIA Act (Chapter 591 of the Laws of Malta) has recently been amended to allow for effective regulation of aspects of AI use deriving from the AI Act; DORA implementation guidelines have been published by the MFSA to assist licensed operators in implementing their obligations; and the NIS 2 Directive has been transposed into Maltese law through the “Measures for a High Common Level of Cybersecurity across the European Union (Malta) Order, 2025”.
Concurrently, Malta’s financial, gaming and data protection regulators (the Malta Financial Services Authority (MFSA), the Malta Gaming Authority (MGA) and the Information and Data Protection Commissioner (IDPC), respectively) have engaged with AI-related developments within their regulated sectors, closely following positions published by pan-European regulators and providing their own feedback and positions.
In sum, while Malta has not yet implemented national AI-specific legislation, it has established a robust ecosystem of laws, strategic guidance, voluntary regulation, and institutional support. This layered approach positions Malta to seamlessly integrate the forthcoming obligations under the EU AI Act while continuing to foster innovation in a responsible and ethical manner.
Malta has not attempted to legislate specifically on intellectual property matters relating to artificial intelligence. Although this has not yet been tested in a Maltese court, in line with the prevalent position in the EU it is expected that a Maltese court would neither treat the indiscriminate use of protected material for machine-learning purposes as a copyright exception nor treat AI-generated works as copyrightable.
-
Which rules apply to defective artificial intelligence systems, i.e. artificial intelligence systems that do not provide the safety that the public at large is entitled to expect?
Under Maltese law, liability for defective AI systems—those that fail to provide the level of safety the public is entitled to expect—is governed by a combination of existing civil and criminal legal frameworks, as well as evolving EU legislation.
Directive 85/374/EEC on liability for defective products has been transposed into Maltese law through the Consumer Affairs Act. However, Directive (EU) 2024/2853, which introduces updates to the liability regime by explicitly extending strict liability to software and AI systems, has not yet been incorporated into domestic legislation. At the time of writing, no formal proposals have been tabled to amend Maltese law in this regard, though it is anticipated that transposition efforts will commence in the near future.
Moreover, Malta does not currently recognise AI systems as having separate legal personality. Therefore, liability cannot be attributed to the AI itself, but must instead be borne by natural or legal persons involved in its development, deployment, or use.
-
Please describe any civil and criminal liability rules that may apply in case of damages caused by artificial intelligence systems. Have there been any court decisions or legislative developments clarifying liability frameworks applied to artificial intelligence?
In the Maltese legal system, the general principles of contract and tort law remain applicable to instances involving artificial intelligence. These principles are enshrined in the Civil Code and the Commercial Code (Chapters 16 and 13 of the Laws of Malta, respectively). A foundational tenet underpinning both areas of law is the obligation to act in good faith, consistent with the standard of the bonus paterfamilias.
Since artificial intelligence is usually thought of as a tool, the person or organisation using it is ultimately liable for any damage brought about by its use or misuse. Article 1033 of the Civil Code is especially pertinent: it imposes liability on any person who, through an act or omission, intentionally or negligently breaches a legal obligation and thereby causes harm. Users are expected to exercise a duty of care, and this provision applies equally whether AI is used in a personal or professional setting.
Importantly, users who rely on the so-called “black box” nature of some systems, or who claim ignorance of how the AI works, cannot escape responsibility. Users remain required by law to take reasonable steps to inform themselves of the risks and limitations of the technology they use.
Except in those circumstances where EU law applies a notion of strict liability on the manufacturers of AI systems, in accordance with set legal principles of Maltese procedural law the person or entity that suffers damages as a result of a third party’s use of AI systems would be expected to sue that party that caused the damage, rather than the manufacturer of the AI system in question. As mentioned earlier, Directive (EU) 2024/2853 has not yet been transposed into Maltese law.
From the standpoint of criminal law, non-human entities are not considered criminally liable under current Maltese legislation. Both actus reus (the act) and mens rea (the intent) are prerequisites for criminal liability in Maltese law. The latter is invariably linked to a human actor (usually the system’s developer, operator, or user), although the former may occasionally be ascribed to the activities of an AI system. It is therefore expected that, even within this area of application, responsibility will be ascribed to the users of AI systems.
-
Who is responsible for any harm caused by an AI system? And how is the liability allocated between the developer, the deployer, the user and the victim?
Under Maltese law, artificial intelligence systems are not recognised as having legal personality. As a result, liability for harm caused by such systems cannot be attributed to the AI itself, but must instead be borne by natural or legal persons involved in its development, deployment, or use.
In determining liability, the underlying cause of the injury and the relationship between the injured party and the developer/manufacturer or deployer are crucial. Strict liability of the developer/manufacturer applies only where the law so provides. Otherwise, the direct link in the chain of causation of the damage determines the responsible party in the particular context.
The injured party retains the right to bring a claim directly against the party most closely connected to the cause of the harm, typically the deployer who exercised control over the system at the time of the incident. Unless the claimant also proceeds against the developer, the deployer may, depending on the contractual or tortious circumstances, seek recourse against the developer for indemnity or contribution after satisfying any liability.
The allocation of liability thus hinges on a factual assessment of causation and fault. Maltese civil law, grounded in the principle of bonus paterfamilias and codified in Article 1033 of the Civil Code, requires each party involved in the lifecycle of the AI system to exercise due diligence appropriate to their role, whether in development, deployment, or use.
-
What burden of proof will have to be satisfied for the victim of the damage to obtain compensation?
Under Maltese law, the burden of proof for a victim seeking compensation for harm caused by an AI system is governed by traditional civil liability principles, primarily Article 1033 of the Civil Code.
The victim is expected to satisfy the following burden of proof under Article 1033 of the Civil Code:
- Existence of Damage: The victim must demonstrate that they suffered actual harm/damage;
- Causal Link: The victim must prove a direct causal connection between the AI system’s operation and the harm/damage suffered;
- Fault or Negligence: The victim must show that the harm was caused by negligence, imprudence, or want of attention by a natural or legal person involved in the AI system’s lifecycle (developer, deployer, or user); and
- Identification of the Liable Party: The victim must identify the party most closely connected to the cause of the harm—typically the deployer who had operational control at the time.
The New Product Liability Directive (Directive (EU) 2024/2853) eases the burden of proof for victims seeking compensation. It creates a presumption of defectiveness and causation when proving these elements is “excessively difficult” due to the product’s technical complexity, provided the defect and its link to the damage are at least “probable”. Courts can also require defendants to disclose relevant evidence if the claimant’s case is plausible, helping to address information gaps. Defendants’ trade secrets must be protected, though the Directive does not detail how. These changes make it easier for victims to prove their claims, especially with advanced technologies like AI and medical devices. This Directive has not been transposed into Maltese law to date.
-
Is the use of artificial intelligence insured and/or insurable in your jurisdiction?
Malta does not currently have a standalone statutory framework that mandates insurance for AI systems. However, AI-related risks are generally insurable under existing liability and professional indemnity policies.
-
Can artificial intelligence be named an inventor in a patent application filed in your jurisdiction?
According to the Maltese Patents and Designs Act (Chapter 417 of the Laws of Malta), the right to a patent is granted to the “inventor”, who must be a natural person; where the invention is made in the course of employment, the patent right is assigned to the employer. A patent application can therefore only be filed by a natural or legal person. Accordingly, Maltese law would not accept artificial intelligence as an inventor. Moreover, where AI is involved in the creation of an invention, a natural person must show significant involvement in the creative process for the invention to be eligible for patent protection. Currently, there are no Maltese court rulings that provide further guidance on this matter.
-
Do images generated by and/or with artificial intelligence benefit from copyright protection in your jurisdiction? If so, who is the authorship attributed to?
Under Maltese law, AI systems, as computer programs and algorithms, receive copyright protection as literary works under Article 2 of the Copyright Act (Chapter 415 of the Laws of Malta), provided they possess originality. Additionally, databases compiled for training AI models may be protected under sui generis rights if there has been a substantial investment in obtaining, verifying, or presenting their contents (Article 25).
However, for AI-generated images or works, copyright protection does not automatically apply. The law requires that an “author” be a natural person. Thus, AI-generated works do not qualify for copyright unless a natural person can demonstrate substantive participation in the creation process. There is currently no Maltese case law clarifying this issue.
Moreover, the prompts used to generate AI works can be protected as trade secrets under the Trade Secrets Act (Chapter 589 of the Laws of Malta), if they meet certain criteria such as secrecy, commercial value, and reasonable measures to keep them confidential.
Finally, users should be aware of potential liability for infringement of third-party rights embedded in AI training data, even if the infringement is unknown to them. Compliance with licensing conditions, for instance in using platforms like OpenAI, is also required.
At present, no legislative measures have been taken in Malta to specifically protect AI-generated works independently of human authorship.
-
What are the main issues to consider when using artificial intelligence systems in the workplace? Have any new regulations been introduced regarding AI-driven hiring, performance assessment, or employee monitoring?
Malta has not yet enacted standalone legislation specifically regulating the use of AI in employment contexts such as hiring, performance evaluation, or employee monitoring; rather, Malta relies on a combination of the following measures:
- EU-level legislation, through the AI Act and GDPR; and
- Malta’s AI Strategy and Vision 2030, a policy framework which promotes ethical AI adoption and workforce readiness, with action points focusing on equipping workers with digital skills, anticipating automation’s impact on the labour market, and increasing AI awareness at all education levels.
Under the AI Act, the use of AI for employment purposes is deemed “high risk”; consequently, developers and deployers of AI systems used for this purpose must ensure compliance with their obligations under the AI Act. Moreover, the use of AI systems for fully automated decision-making in employment matters falls within the purview of Article 22 of the GDPR, which requires fair and transparent processing based on legitimate grounds, together with technical and organisational measures to ensure appropriate processing and the minimisation of errors.
-
What privacy issues arise from the development (including training) and use of artificial intelligence?
The various privacy concerns raised in connection with the creation and application of artificial intelligence (AI), and large language models (LLMs) in particular, have been the focus of much debate.
Training datasets, which are frequently scraped from online sources without individuals’ knowledge or consent, may contain personal data as defined by GDPR Article 4(1). This calls into question whether such processing is lawful, particularly when the controller’s “legitimate interest” (Article 6(1)(f) GDPR) is cited as the legal justification. The principles of purpose limitation and data minimisation under Article 5(1) GDPR also weigh on this matter, since training the models would not fall within the original purpose for which the personal data was gathered and processed.
Therefore, subsequent applications of the AI system may become problematic if personal data is processed illegally during training, as the outputs may be tainted by the initial illegality.
These problems are not limited to a single jurisdiction. Given that AI technologies are cross-border in nature, they affect data subjects in multiple EU/EEA countries, necessitating a coordinated interpretation and response by data protection authorities. In its Opinion 28/2024, the European Data Protection Board (EDPB) has acknowledged the new and systemic threats that artificial intelligence (AI) poses to the GDPR framework, reaffirming the need for a unified strategy to ensure that technological advancements are pursued in a way that upholds fundamental data protection rights.
As mentioned earlier, the use of AI systems for fully automated decision-making in matters that could affect the life and fundamental rights of a person falls within the scope of Article 22(1) of the GDPR. The ECJ judgment on the preliminary reference in “OQ v Land Hessen” (Case C-634/21) provides a thorough understanding of the implications and application of these principles.
Malta has not legislated in accordance with Article 22(2)(b) GDPR to allow for decisions based solely on automated processing of personal data.
-
How is data scraping regulated in your jurisdiction from an IP, privacy and competition point of view? Are there any recent precedents addressing the legality of data scraping for AI training?
Scraping for machine learning purposes raises various legal questions.
In the context of copyright, Malta has transposed the text and data mining (TDM) exception to copyright and database rights, found in Article 4 of the Copyright and Related Rights in the Digital Single Market Directive (EU) 2019/790, almost verbatim through Regulation 5(1) of the Copyright and Related Rights in the Digital Single Market Regulations (Subsidiary Legislation 415.08). This permits the reproduction of copyrightable material for the purposes of text and data mining, meaning automated analytical techniques aimed at analysing text and data in digital form in order to generate information, including but not limited to patterns, trends and correlations. The exception does not apply where the right holders have expressly reserved their rights in an appropriate manner, such as by machine-readable means in the case of content made publicly available online, and it applies only in instances which do not conflict with the normal exploitation of the work or other subject-matter and do not unreasonably prejudice the legitimate interests of the right holder.
Data scraping presents significant issues from a competition law standpoint, especially when it is used to obtain economically sensitive information from a competitor’s online platforms. Where AI systems are used to set prices, a distortion of competition, potentially amounting to collusion, could occur even involuntarily: because the machine-learning process is based on scraped market-player data, prices set by the algorithm may converge at similar levels, limiting competition.
In line with the GDPR, any scraping involving personal data triggers the obligations of a data controller. In order to scrape data off the internet, one must have a valid legal basis (usually the ground of legitimate interest) and comply with transparency, data minimisation, accuracy, storage limitation, and integrity requirements. Controllers must also honour data subject rights, including notice, access, rectification, erasure and objection.
-
To what extent is the prohibition of data scraping in the terms of use of a website enforceable?
To date, there is no guidance from the Maltese courts, through jurisprudence, on the means by which right holders can exclude the TDM exception to copyright. It is expected that a system whereby the user must accept a website’s terms and conditions in order to access the site would pass the test of Regulation 5(2) of S.L. 415.08. Short of such an evident limitation, it would be more difficult to argue that the terms of website use bind visitors to the site.
-
Have the privacy authorities of your jurisdiction issued guidelines on artificial intelligence?
The Information and Data Protection Commissioner (IDPC) was formally appointed as the Market Surveillance Authority (MSA) and the fundamental rights authority (FRA) for the purposes of the EU Artificial Intelligence Act.
Rather than issuing its own opinions or guidance, the IDPC has been referring to and publicising the opinions and directions of the EDPB and the European Commission on data protection in the area of Artificial Intelligence.
-
Have the privacy authorities of your jurisdiction discussed cases involving artificial intelligence? If yes, what are the key takeaways from these cases?
There are no reported decisions by the IDPC in relation to the development and/or use of AI.
-
Have your national courts already managed cases involving artificial intelligence? If yes, what are the key takeaways from these cases?
There are currently no publicly reported Maltese court decisions that directly adjudicate liability or address legal questions arising from the use of AI systems.
-
Does your country have a regulator or authority responsible for supervising the use and development of artificial intelligence?
In Malta, the supervision and regulation of artificial intelligence is primarily entrusted to the Malta Digital Innovation Authority (MDIA), which is responsible for formulating national AI policy and advising the government on AI-related matters. The MDIA plays a central role in implementing Malta’s AI strategy in collaboration with various public and private stakeholders.
In sector-specific contexts, other authorities are also expected to take an active role. The Malta Financial Services Authority (MFSA) and the Malta Gaming Authority (MGA) are anticipated to provide regulatory oversight in the financial services and gaming industries respectively (both of which are key to the Maltese economy). Similarly, the Ministry for Health and Active Ageing, through its specialised units, will oversee AI applications in healthcare, while Transport Malta will regulate the deployment of autonomous vehicles, drones, and other AI-enabled modes of transport.
Data protection and fundamental rights oversight fall within the remit of the Office of the Information and Data Protection Commissioner (IDPC) which, as mentioned earlier, has been appointed as MSA and FRA.
-
How would you define the use of artificial intelligence by businesses in your jurisdiction? Is it widespread or limited? Which sectors have seen the most rapid adoption of AI technologies?
Artificial intelligence is now widely integrated across Malta’s public and private sectors, with adoption accelerating significantly in recent years. The business sector, particularly financial services, gaming, and healthcare, has experienced substantial transformation through AI-driven solutions.
The government has also stated that it has used AI as a strategic tool in the public sector to tackle persistent issues. The use of sophisticated traffic control technologies, spearheaded by Transport Malta, is a noteworthy illustration. Through AI-powered data analysis, this pilot project seeks to optimise public and private travel planning, monitor and enforce road usage, and lower emissions and traffic.
AI is being used in the healthcare industry to improve patient care and operational effectiveness. An AI-based forecasting program is being piloted by the Central Procurement and Supplies Unit (CPSU) to assist with pharmaceutical inventory management and procurement planning. Additionally, in February 2025, Malta became a part of the EU4Health-funded “BreastScan” project, a comprehensive four-year program that uses AI in radiology to increase the precision and speed of breast cancer diagnosis.
Another important area for AI innovation is education. The Digital Education Strategy 2024–2030, which establishes a national path for incorporating AI and digital tools into schools, was adopted by the Ministry of Education in May 2024. The approach places a high priority on community involvement, teacher training in technology-enhanced learning, digital literacy, and fair access to digital resources including computers and tablets. The creation of a sustainable digital infrastructure is also emphasised.
Tourism and utilities are likewise embracing AI solutions. The Malta Tourism Authority is developing a Digital Tourism Platform designed to harness visitor data to enhance service delivery. The Ministry for Energy, Enterprise, and Sustainable Development launched a pilot project which employs AI algorithms to track and optimise usage trends in the water and energy sectors. Large-scale analytics, machine learning, and predictive maintenance are used by the system to guarantee effective resource management across national grids and enhance customer service responsiveness.
-
Is artificial intelligence being used in the legal sector, by lawyers and/or in-house counsels? If so, how? Are AI-driven legal tools widely adopted, and what are the main regulatory concerns surrounding them?
Yes, AI is used in Malta’s legal sector, primarily in the form of legal tech tools that assist with research, drafting, and administrative tasks. However, the adoption of AI-driven legal tools is not yet widespread.
Despite increased interest, there are currently no clear guidelines on the use of AI in legal practice from the Chamber of Advocates, the representative body of lawyers in Malta, or the Committee for Advocates and Legal Procurators, the regulating authority. The use of generative AI and its effects on legal privilege and professional secrecy are two major ethical issues that still need to be addressed. In accordance with the Code of Ethics, the Professional Secrecy Act (Chapter 377 of the Laws of Malta), and the Code of Organisation and Civil Procedure (Chapter 12), lawyers remain bound by their ethical and legal obligations until explicit regulations are adopted.
AI is regarded as a helpful but somewhat risky tool that does not reduce lawyers’ professional responsibilities. In the absence of local regulation, practitioners are looking to foreign guidance, such as that issued by the UK Bar Council, to inform their use of AI responsibly.
-
What are the 5 key challenges and the 5 key opportunities raised by artificial intelligence for lawyers in your jurisdiction?
AI presents numerous opportunities for legal professionals. AI-powered tools can analyse vast amounts of data with remarkable speed and accuracy, for example by identifying key documents and extracting pertinent information.
Difficulties encountered when making use of AI-powered tools include bias in training data, which may be replicated by AI systems and could lead to biased or discriminatory results. Furthermore, many AI systems are “black box” in nature, making it difficult to understand the reasoning behind their outputs and complicating efforts to guarantee accountability and transparency. There are ethical issues, too: lawyers remain accountable for choices that are influenced by AI-generated insights. Various examples of lawyers citing bogus judgments in foreign jurisdictions are in the public domain, which highlights the risks of “hallucinations” linked to AI. Moreover, the risk of breaches of confidentiality and client privilege, which could involve financial information, case files, and client records, is heightened by the growing dependence on AI.
-
Where do you see the most significant legal developments in artificial intelligence in your jurisdiction in the next 12 months? Are there any ongoing initiatives that could reshape AI governance?
The coming 12 months are expected to considerably shape the regulatory framework governing the use of AI. Regulators are expected to issue guidelines and to increase coordination among themselves, while industry players prepare for the entry into force of the various obligations relating to their use of AI.
Malta: Artificial Intelligence
This country-specific Q&A provides an overview of Artificial Intelligence laws and regulations applicable in Malta.