Dr. Alberto-Giovanni Busetto, Chief AI Officer of HealthAI – Interview Series

Dr. Alberto-Giovanni Busetto is a Swiss-Italian AI executive and innovator and the Chief AI Officer of HealthAI. He is a member of the World Economic Forum’s Global Future Council, previously served as the first global head of Data Science & AI at Merck Healthcare, and was among the first group of data and AI leaders at the Adecco Group.
Throughout his career, Alberto-Giovanni has been recognized as a United Nations Global Compact Mentor, a Merck Digital Champion, and an IBM Outstanding Speaker. He is a member of the World Employment Federation’s mission on global AI governance and was honored by the United States National Academy of Engineering as one of the country’s outstanding early-career engineers. In addition, he served as chair of the Big Data session in the Japan–U.S. engineering field.
What inspired your transition to HealthAI from AI leadership roles at major companies like Merck?
Hello, I am Dr. Alberto-Giovanni Busetto, Chief AI Officer of HealthAI – the global agency for responsible AI in health. I have over 20 years of experience in the design, development, deployment, and management of AI solutions. The hallmark of my career has been using AI to achieve meaningful impact. At Merck Healthcare, I led Global Data Science and AI, driving AI-powered healthcare and biotech solutions. That role underscored the transformative potential of AI in the health sector.
The transition to HealthAI has allowed me to focus more on responsible AI deployments in health, aiming to bridge the gap between technological innovation and people-centered care. In this regard, data governance is also one of my core priorities. What we are really excited about at HealthAI is our work on regulatory and innovation sandboxes, where we are building blueprints to accelerate the adoption of responsible AI and enable innovators to scale.
I was driven by a question: how do we make AI not only smarter, but truly useful where it matters most – making people’s lives better? At HealthAI, I have the opportunity to shape how we view AI in health and to guide governments and health agencies in deploying AI solutions that are not only cutting-edge, but also ethical, transparent, and deeply integrated into real-world health needs. For me, it’s not just about algorithms – it’s about impact.
What makes you most excited about the intersection of AI and health?
The convergence of artificial intelligence and health offers opportunities to improve outcomes through better diagnosis, personalized treatment, and optimized health systems. AI’s ability to analyze complex datasets can, for example, lead to earlier disease detection and more accurate prognoses.
What excites me most is that these advancements are not reserved for high-income countries – they also have the potential to transform the health sectors of low- and middle-income countries. AI-driven diagnostics can bring specialist-level insights to regions with limited medical expertise, predictive analytics can help allocate resources where they are most needed, and digital health tools can bridge gaps in access to care. By deploying AI responsibly, we can create more equitable health solutions that serve people everywhere, regardless of geography or income level.
How can AI help narrow the health gap between high-income countries and low- and middle-income countries? What challenges arise in ensuring equitable access?
Artificial intelligence can democratize health by making advanced medical insights available worldwide. In areas with limited medical expertise, AI-driven diagnostic tools can help identify diseases accurately. AI-enhanced telemedicine platforms, for example, can facilitate knowledge transfer and improve quality of care by connecting experts around the world with remote areas.
Ensuring fair access to AI-driven health solutions involves addressing several challenges, the most prominent of which relate to infrastructure constraints: many regions lack the digital foundation needed to support advanced solutions, which can hinder the adoption of AI technology.
Data privacy also remains crucial – protecting personal information requires a strong governance structure to ensure confidentiality and security. Additionally, AI systems must be tailored to local languages and cultural contexts to be truly effective; otherwise, accessibility will be limited.
Regulatory barriers further complicate the landscape, as striking the right balance between promoting innovation and ensuring privacy and security requires thoughtful policy development. By proactively addressing these challenges, AI can become a powerful tool for improving global health equity.
How important is cooperation between governments, technology companies, and health providers in ensuring the responsible development and deployment of AI?
Collaboration between governments, technology companies, and healthcare providers is not only beneficial – it is crucial for the responsible development and deployment of AI in health. These partnerships can create comprehensive frameworks that address ethical considerations, protect data privacy, and establish operational standards to ensure AI is both effective and trustworthy.
By working together, stakeholders can move beyond fragmented, one-size-fits-all solutions and instead develop AI-driven approaches tailored to real-world health needs. This means leveraging AI to enhance diagnosis, streamline clinical workflows, and expand access to quality care – especially in underserved areas. Furthermore, collaboration promotes transparency and accountability, ensuring that AI remains a tool for empowerment rather than exclusion.
When innovation is driven by shared responsibility and aligned with public health priorities, AI has the potential to reshape our health approaches in a transformative and equitable way.
What ethical considerations should be at the forefront of AI-driven health solutions?
When we consider ethics in AI-driven health solutions, we need to concentrate on a few key areas. First, we must address bias mitigation, ensuring that AI models do not inadvertently exacerbate existing health disparities. Transparency is also crucial: the decision-making process behind AI must be clear and understandable to all relevant parties. Next is accountability, with clear responsibility for any decisions made by an AI system, so that when things go wrong, someone can be held responsible.
Autonomy is at the heart of ethical AI in health, which means we should not only respect people, but also actively support people’s right to make informed decisions about their care. It’s not just about providing information, it’s about making sure people are fully aware of their choices, potential risks and benefits, and the role that AI plays.
We must ensure that the people AI is meant to benefit feel capable and confident in their choices, knowing they have a say in the technology that affects their health and well-being.
Health AI models sometimes show bias. How can regulators and AI developers mitigate this risk?
To mitigate bias in AI models, regulators and developers must focus on building systems that represent diverse populations. This starts with collecting data from a broad range of people, so the AI is trained on the real-world diversity of patients. But data alone is not enough – continuous monitoring is also critical, and AI systems should be regularly evaluated to identify and correct any biased patterns that may emerge over time.
Involving a variety of stakeholders in the development process – including ethicists, patient representatives, clinicians, and medical experts – can provide valuable perspectives that help ensure AI models are fair and equitable for everyone.
How can governments and organizations ensure AI-driven health solutions use patient data responsibly?
To ensure that AI-driven health solutions use patient data responsibly, governments and organizations need to implement strong data governance policies that clearly outline how patient information is collected, stored and shared.
Anonymization also plays a central role in protecting people’s identities, ensuring that data can be used without compromising privacy. Most importantly, compliance with international and local data protection laws is crucial not only for maintaining trust but also for ensuring that AI systems operate within legal bounds. This approach helps build a foundation of security and transparency that benefits both individuals and the health sector as a whole.
What are the biggest obstacles to regulating AI in health? How can countries overcome them?
The rapid pace of technological progress comes to mind first. Regulation often struggles to keep up with AI innovation, leaving gaps in oversight. Furthermore, policymakers need a deeper understanding of the unique complexities of both AI technology and health in order to develop effective, informed regulations.
Another hurdle is the lack of global standardization – without consistent regulations across countries, it is difficult to promote international cooperation and ensure that AI solutions can be deployed safely and ethically around the world.
To overcome these obstacles, countries will need to invest in the ongoing education of policymakers, coordinate internationally, and remain agile in adapting to new technological developments.
Countries can address these barriers by fostering ongoing dialogue between technologists, health professionals, and regulators, and by investing in education and training programs that bridge knowledge gaps. HealthAI, for example, is addressing this through its Global Regulatory Network (GRN), which strengthens local capacity in AI for health and ensures that regulators worldwide are equipped to manage the evolving landscape of AI in health.
How does HealthAI help countries establish and validate responsible AI verification mechanisms?
As an implementation partner, HealthAI works with governments, ministries of health, and other health organizations to validate not only the safety and efficacy of AI-powered health tools, but also their ethical compliance, ensuring that the technology meets both regulatory requirements and societal values.
HealthAI supports the development of rigorous certification processes that help establish trust and accountability in AI health solutions. By doing so, we ensure that AI systems meet the highest standards before deployment, which is critical to safeguarding people, enhancing beneficial outcomes, and strengthening global confidence in the responsible use of AI.
How can AI be used to predict, track, and manage future health crises?
AI can play a key role in managing health crises by providing more effective tools to predict, track, and contain outbreaks. Through predictive analytics, AI can analyze large volumes of epidemiological data to identify patterns and trends that may signal a potential outbreak before it occurs, giving authorities time to prepare. In addition, AI-powered real-time surveillance can continuously monitor health data from hospitals, clinics, and other sources, making it possible to spot emerging threats quickly and respond rapidly to contain them.
AI can also help optimize resources during a crisis, enabling authorities to allocate medical supplies, personnel, and other critical resources more effectively and ensure they are used where they are most needed. By integrating AI into public health strategies, countries can improve their ability to anticipate and respond to future health emergencies, increasing readiness and resilience in the face of evolving threats. This proactive approach can save lives and help minimize the overall social and economic impact of health crises.
Thank you for the great interview; readers who wish to learn more should visit HealthAI.