Governing the future: bridging the gap between artificial intelligence, the law and regulation

Monday 2 June 2025

Elvan Sevi Bozoğlu
Bozoğlu İzgi, Istanbul
sevi.bozoglu@bi.legal

Ayşe Yakarışık
Bozoğlu İzgi, Istanbul
ayse.yakarisik@bi.legal

The world is progressing at a pace beyond the comprehension of the average human mind and, almost every day, new discoveries are made that profoundly challenge previously established knowledge. With the advancement of technology, artificial intelligence (AI) has become increasingly integrated into daily life and is now a significant component of modern society.

Alongside the concerns it raises about our future, it is evident that AI technologies facilitate human activities, enhance work efficiency and automation, improve and accelerate decision-making processes, foster creativity and reshape various sectors, such as healthcare and education.

Despite its numerous advantages, AI also presents several challenges. These include ethical and legal concerns, increased unemployment, the proliferation of manipulative technologies, such as deepfakes, and the potential threat of uncontrollable AI systems. At the core of these concerns lies a lack of ethical and legal regulation. Alongside the positive and negative implications of AI integration, uncertainties and anticipated risks underscore the need for appropriate regulatory frameworks. The pace of technological advancement, the significant impact of AI on various industries and the global push to integrate AI into business operations all make it necessary for countries to regulate AI through legal frameworks and regulatory measures.

Europe’s legal approach to AI

To address the risks associated with AI, particularly in areas such as healthcare, security and fundamental rights and freedoms, and to establish global standards in response to its rapid development, the European Union has taken a significant step towards AI regulation. On 21 April 2021, the European Commission published a Proposal for a Regulation Laying Down Harmonised Rules on Artificial Intelligence, an initiative considered a crucial milestone in AI governance. The resulting EU AI Act, which entered into force on 1 August 2024, has since become a significant and pioneering regulatory framework for many countries. Aimed at fostering the secure and transparent advancement of AI technologies, the AI Act seeks to protect users’ rights and freedoms, while ensuring ethical AI development and safeguarding personal data. It adopts a risk-based approach, which plays a vital role in protecting personal data and fundamental human rights, and its strict rules, such as the prohibition of AI applications that present unacceptable risks, demonstrate the comprehensive scope of its regulatory framework.

Turkey’s legal approach to AI

Although Turkey lags behind in regulating AI technologies, it ranks among the countries with above-average integration of AI in the private sector. In 2019, Presidential Decree No. 48 established the Directorate of Artificial Intelligence Applications under the Presidency’s Digital Transformation Office.[1] This institution is responsible for developing strategies related to AI applications and supporting the administrative and technical coordination of public institutions and organisations in their use of AI.

Following the approval of the AI Act, the Draft Artificial Intelligence Law was presented to the Grand National Assembly of Turkey on 24 June 2024. During the evaluation of the Draft Law, the Presidential Digital Transformation Office announced an update to the National Artificial Intelligence Strategy 2021-2025 Action Plan. The revised National Artificial Intelligence Strategy 2024-2025 Action Plan outlines key objectives, including fostering AI research and improving access to high-quality data and innovation. This update reflects Turkey’s ongoing commitment to AI governance and marks a significant step in strengthening its engagement both nationally and internationally. Despite this progress, however, the Draft Law, although prepared with reference to the EU’s AI Act, remains limited in scope.

While inspired by the EU AI Act, the Draft Law focuses on a narrow set of issues, whereas the AI Act provides a comprehensive regulatory framework addressing multiple aspects of AI governance. The Draft Law fails to thoroughly address fundamental rights and freedoms, lacks essential risk assessment mechanisms and remains significantly detached from global discussions on AI regulation. The most significant difference between the EU AI Act and the Draft Law proposed in Turkey is the absence of a risk-based approach. The AI Act classifies unacceptable risk as the highest level of risk, encompassing AI applications that exploit individuals’ vulnerabilities or scrape facial images, all of which conflict with fundamental rights under EU law. Aligning the legal framework for AI being developed in Turkey with the EU’s approach and adopting a risk assessment perspective would not only avoid reinventing the wheel, but would also help ensure that Turkey moves forward with a legal understanding compatible with that of its most significant trading partner. Given the differences between the AI Act and the Draft Law, it is essential to expand the scope of the Draft Law and address its gaps in more detail.

The role of AI in the healthcare sector

As we have all witnessed, AI is increasingly being used in various areas of the healthcare sector. Today, AI is facilitating tasks such as synthesising information, processing patient data, managing complex medical data, such as gene sequences and MRI scans, handling large volumes of medical literature and ensuring effective data management.[2] Moreover, AI improves human performance by supporting clinicians in diagnosing uncommon diseases, minimising mistakes and handling complicated treatment interactions.[3] As the number of patients with multiple co-morbidities increases, AI contributes to more personalised and efficient patient care.[4] The potential applications of AI in healthcare are therefore vast, and they raise questions in areas such as privacy, research, patient consent, autonomy, accountability and AI-based diagnostics.[5]

However, alongside the convenience AI brings and its acceleration of technological advancements, there are unique challenges to regulating AI specific to the healthcare sector. These challenges stem from the inherent complexities of the healthcare sector itself.[6]

AI applications in healthcare depend on data that is often considered private and sensitive.[7] In addition, AI technologies can analyse indirect data, such as social media activity and search history, to deduce an individual’s health status.[8] Despite its advantages, AI presents risks such as covert monitoring, data leaks and cyberattacks, which highlights the necessity for governments and researchers to anticipate and mitigate these potential abuses.[9] Suggestions for improving the regulation of AI applications in healthcare include safeguarding health data, reducing risks and fostering international collaboration to adopt unified standards under the World Health Organization (WHO). As noted above, EU law offers a framework for updating the International Health Regulations, providing valuable guidance for global regulation efforts.[10]

On the other hand, there is the matter of processing sensitive personal data. In the healthcare industry, most of the data processed is considered sensitive, as it pertains to health information. According to Article 6(1) of Personal Data Protection Law No. 6698, personal data related to individuals’ health, sexual life and genetic data is classified as sensitive data.[11] The protection of personal data is interconnected with legal regulations concerning the use of AI and laws related to liability.

From a legal perspective, particularly when utilising AI applications, including generative AI tools, extra attention must be paid to data security. Given that legal regulations for these platforms are still incomplete and the technology is still evolving, various initiatives are being developed to ensure that sensitive personal data is not compromised. In this context, the Turkish Institute for Health Data Research and AI Applications has been established, and a regulation on its structure and operation has been issued to govern its activities. The institute aims to enhance Turkey’s competitive edge in health data research and AI applications, address the scientific and technological needs involved in improving the effectiveness of healthcare services, conduct innovative and pioneering research and contribute to the determination of priority policies in the field of health data research and AI applications.[12]

Considering all this, the protection of personal data is both challenging and crucial, especially in the healthcare sector. As human errors are often responsible for data breaches, healthcare organisations should provide comprehensive training and conduct regular risk assessments to address security vulnerabilities.[13] Using tools like virtual private networks (VPNs), limiting access to certified personnel, and implementing two-factor authentication and role-based access control systems can significantly improve data security and protect against cyberattacks and unauthorised access.[14]
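By way of illustration only, and not as a description of any measure mandated by the regulations discussed above, the short Python sketch below shows how a role-based access control check combined with a second-factor condition might look in practice. The role names, record fields and the AccessRequest structure are hypothetical.

```python
from dataclasses import dataclass

# Hypothetical roles and the record fields each role may read.
# Role names and field groupings are illustrative only.
ROLE_PERMISSIONS = {
    "physician": {"demographics", "diagnoses", "genetic_data"},
    "nurse": {"demographics", "diagnoses"},
    "billing_clerk": {"demographics"},
}

@dataclass
class AccessRequest:
    user_id: str
    role: str
    requested_field: str
    second_factor_verified: bool  # eg, a one-time code has been confirmed

def is_access_allowed(request: AccessRequest) -> bool:
    """Allow access only if the role covers the field and, for sensitive
    fields, a second authentication factor has been verified."""
    allowed_fields = ROLE_PERMISSIONS.get(request.role, set())
    if request.requested_field not in allowed_fields:
        return False
    # Anything beyond basic demographics is treated as sensitive here.
    if request.requested_field != "demographics" and not request.second_factor_verified:
        return False
    return True

# Example: a nurse without a verified second factor cannot read diagnoses.
print(is_access_allowed(AccessRequest("u42", "nurse", "diagnoses", False)))  # False
print(is_access_allowed(AccessRequest("u42", "nurse", "diagnoses", True)))   # True
```

In such a sketch, the deny-by-default logic and the second-factor requirement for sensitive fields mirror, in simplified form, the layered controls described above.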

Lawmaking should be proactive, not reactive, to ensure a safer future…

Technological advancements continuously build upon what has already been developed and implemented, extending towards areas that remain unexplored.[15] The crucial aspect lies in utilising technology in ways that benefit human life, while simultaneously establishing appropriate legal frameworks and supporting regulations. As technology continues to outpace legal frameworks, regulatory strategies must evolve over time, either due to their inefficacy or in response to emerging challenges, such as new risks, risk creators or newly established objectives.[16]

The law must adapt to innovation in order to keep pace with technological advancement. As of today, the law has not been able to keep up with the rapid progress of AI. However, it is crucial that this situation is reversed as soon as possible. As the management thinker Russell L Ackoff noted, increasing efficiency using the wrong approach only amplifies errors.[17] It is preferable to execute the right strategy imperfectly rather than to perfect the wrong one.[18] When mistakes are made in the right direction and are subsequently corrected, progress is achieved.[19] Where there is law, there is security. Once security is violated, the law that follows will either be overly restrictive and obstructive, or insufficient.

Notes


[1] T.C. Cumhurbaşkanlığı Dijital Dönüşüm Ofisi, Ulusal Yapay Zekâ Stratejisi 2021-2025, 2021, p. 39, https://cbddo.gov.tr/SharedFolderServer/Genel/File/TR-UlusalYZStratejisi2021-2025.pdf, last accessed 9 May 2025.

[2] Eileen Koski and Judy Murphy, AI in Healthcare, in Nurses and Midwives in the Digital Age (e-book), Vol. 284, 2021, p. 297.

[3] Ibid., p. 297.

[4] Ibid., p. 297.

[5] Rabai Bouderhem, Shaping the Future of AI in Healthcare Through Ethics and Governance, Humanities & Social Sciences Communications, Vol. 11, Article No. 416, 2024, p. 1.

[6] Ibid., p. 1.

[7] Nuffield Council on Bioethics, Artificial Intelligence (AI) in Healthcare and Research, Bioethics Briefing Note, p. 6.

[8] Ibid., p. 6.

[9] Ibid., p. 6.

[10] Ibid., p. 1.

[11] Personal Data Protection Law No. 6698, Official Gazette No. 29677, 7 April 2016.

[12] The Regulation on the Organisation and Implementation of Activities of the Turkish Institute of Health Data Research and Artificial Intelligence Applications, Official Gazette No. 31776, 12 March 2022.

[13] Rabai Bouderhem, Shaping the Future of AI in Healthcare Through Ethics and Governance, Humanities & Social Sciences Communications, Vol. 11, Article No. 416, 2024, p. 5.

[14] Ibid., p. 5.

[15] Langdon Winner, The Whale and the Reactor, University of Chicago Press, 1986, p. 174.

[16] Robert Baldwin, Martin Cave and Martin Lodge, Understanding Regulation: Theory, Strategy, and Practice, Oxford University Press, p. 132.

[17] Ibid., p. 133.

[18] Ibid., p. 133.

[19] Ibid., p. 133.