Recognising the transformative potential of AI in healthcare (e.g. shortening research and development timelines by analysing vast datasets to identify potential drug candidates and predict their effectiveness), as well as its risks (e.g. disclosure of personal data ingested by an AI model), Singapore published the Artificial Intelligence in Healthcare Guidelines (AIHGle) in 2021 to regulate AI adoption and ensure safety and quality. Since then, the Guidelines have been updated to version 2.0, dovetailing with two further developments: Singapore's Health Sciences Authority (HSA) attaining the highest World Health Organisation (WHO) maturity level (ML4) as a medical device regulatory authority, and Singapore launching the world's first model AI governance framework for agentic AI, which provides guidance on deploying AI agents responsibly.
Retaining the focus on trustworthiness, practicality, and risk-based governance rather than restrictive legislation, version 2.0 addresses developments such as generative AI, to better support innovation while preserving the ethics and values of the healthcare profession. The updated Guidelines seek to enable healthcare institutions to build and implement clinically safe and effective AI solutions, and regulatory sandboxes will be set up to help evaluate these solutions in real-world healthcare settings.
Acknowledging that registration of new drugs involves costly and time-consuming early-phase clinical trials, HSA appears amenable to validation through laboratory-generated simulated data. HSA will take a technology-neutral approach to regulation, applying the same rigour to AI-developed drugs as to conventionally developed ones. HSA will therefore have to examine the AI model, validate the data pipelines, ensure transparency of data sources, and objectively evaluate the quality, reliability, and reproducibility of the resulting drug.
In this context, the 42-page AIHGle 2.0 document identifies, in sections 2.2.1 and 2.2.2, AI solutions employing Machine Learning (ML) or Deep Learning (DL) algorithms as candidates for further scrutiny: the former presents risks such as model drift, while the latter presents risks of bias, hallucination, and data disclosure.
It is envisaged that even where existing ML or DL algorithms are used as part of the AI drug validation process, arriving at fit-for-purpose AI solutions requires customisation for integration into existing technical infrastructure and clinical workflows. Such adaptations, together with the implementation of appropriate measures to comply with applicable personal data protection laws and principles where training datasets include personal data, present compelling opportunities for targeted patenting strategies. This is particularly so given the nascent nature of AIHGle version 2.0 and the potential emergence of industry standards, akin to standard-essential patents, as the healthcare industry pivots towards leveraging AI in drug registration.
As of 10 March 2026, HSA has yet to receive an application for registration of an AI-developed drug, presenting an opportunity for a first-mover advantage.
Our experienced patent attorneys are well versed in the nuances of this evolving area and can guide you through them. For strategic advice or filing support, please feel free to contact us directly.

