Concept Graph & Resume using Claude 3 Opus | ChatGPT-4o | Llama 3:
Resume:
1.- Simone Campus welcomes everyone to the workshop wrapping up 5 years of work by the focus group on AI for Health.
2.- Sameer Pujari is excited about the next stage, noting that workshops like this one strengthen the work and generate new ideas for discussion.
3.- Thomas Wiegand gives an overview of the focus group's work on using AI to address the shortage of health workers.
4.- The focus group involved people from medicine, machine learning, public health, government regulation, ethics, and other areas like economics.
5.- The aim was documenting best practices, establishing standards, and enabling people worldwide to create AI for health solutions.
6.- The focus group held 19 meetings around the world and moved online during the COVID pandemic, becoming more efficient.
7.- Working groups created cross-cutting best practices and reference documents applied by topic groups on specific AI for health use cases.
8.- The focus group produced some 1,000 to 2,000 pages of standardization and guidance documents.
9.- 24 topic groups were established representing medical/health use cases that can benefit from AI, bringing together experts and data.
10.- A software assessment platform was developed to benchmark AI solutions using contributed and withheld test data (a minimal benchmarking sketch follows this list).
11.- The focus group reached out through webinars, workshops, and publications as part of their outreach program.
12.- Rohit Malpani presents an overview of the ethics and governance work done over the last few years.
13.- The ethics working group aimed to maximize benefits from AI while addressing potential ethical challenges and harms.
14.- Key ethical principles were developed as a framework for guidance and regulation of AI for health.
15.- Recommendations were provided on how to govern AI for health to address current gaps in laws and regulations.
16.- The ethics guidance has been disseminated through an online curriculum, regional workshops, discussions with companies, and application by health agencies.
17.- Additional work is being done on large language models, AI in pharmaceutical R&D, and developing an ethics curriculum for designers/programmers.
18.- Shada Salah Ali presents an overview of the regulatory considerations working group.
19.- The group aimed to bridge gaps between regulators and developers to facilitate approval of safe, effective, and accessible AI.
20.- 50 members from 28 countries, mostly regulatory agencies, provided diverse regional perspectives in the working group.
21.- 18 recommendations were developed across 6 topic areas: documentation, risk management, validation, data quality, engagement, and data protection.
22.- An online course and regional implementation of the regulatory guidance are planned as next steps.
23.- Mark Landry and Verat Baekelandt present the work of the data and AI solution handling working group.
24.- The group designed an end-to-end process and platform for building and assessing health AI algorithms globally.
25.- A decentralized data processing approach was used to bring computation closer to where the data is stored (see the federated evaluation sketch after this list).
26.- Data hubs were developed as a blueprint that can interconnect to create a worldwide network offering federated capabilities.
27.- The platform, called Open Code Initiative, supports the full process with security, privacy, and adaptation to local requirements.
28.- It facilitates comparison of algorithms across different data aggregation levels and enables data sharing for collaboration.
29.- Andrew Farlow presents the work of the collaborations and outreach working group over the last two years.
30.- The group aimed to foster collaborations, promote outreach, increase expertise, strengthen local intelligence, and improve government buy-in and evaluation frameworks.
31.- Many webinars, workshops, and reports were produced in partnership with country groups and on topics like vaccine access and antimicrobial resistance.
32.- Regional meetings were held in Cameroon and Sri Lanka to build capacity and work with local partners.
33.- Local innovation capacity and the inclusion of end users in the design of challenges and solutions were emphasized.
34.- Luis Oala presents an overview of the data and AI solution assessment methods working group.
35.- The group brought people together, practiced and promoted AI assessment methods, and connected with other groups doing similar work.
36.- An assessment platform and process was developed in collaboration with the Open Code Initiative and WHO.
37.- Lessons learned include the need to identify mature AI groups, integrate with devices, and curate public good AI solutions.
38.- Looking ahead, the group plans to host a call for AI demos and a conference on data-centric machine learning.
39.- Eva Petersen presents the work of the clinical evaluation working group in developing a framework for clinical evaluation of AI.
40.- The framework encompasses design, analytical validation, clinical validation, and ongoing monitoring of AI models across their lifecycle.
41.- A global community of experts was convened to ensure the framework leaves no one behind.
42.- The framework was tested and made more practical through a checklist deployed in a point-of-care diagnostics project.
43.- Future work will determine if clinical evaluation remains a standalone workstream and address gaps like economic evaluation.
44.- Petersen also introduces an overview of the 24 topic groups as use cases to which the working group guidance applies.
45.- Johan Lundin presents the work of the AI@POC topic group on point-of-care diagnostics, especially for cervical cancer screening.
46.- Their method combines human experts and AI analysis of digitized microscopy samples to extend access to diagnostics.
47.- Cervical cancer deaths now exceed maternal deaths globally, with very low screening coverage in sub-Saharan Africa.
48.- The AI@POC method was implemented in Kenya and Tanzania, using minimal POC infrastructure to capture and upload images for remote analysis.
49.- High accuracy was achieved in detecting pre-cancerous lesions, enabling a 10x increase in diagnostic capacity per expert.
50.- A large 2000-woman validation study is underway. Cost-effectiveness studies and expansion to other sample types are planned.
51.- Henry Hoffmann presents the work of the symptom assessment topic group in enabling standardized benchmarking of AI symptom checkers.
52.- 22 companies collaborated to build a benchmarking platform to compare AI solutions across different ontologies and data aggregation levels.
53.- Test cases were developed and performance evaluated. Data quality, bias and subgroup analysis were key considerations.
54.- Large language models are expected to transform the field. Trusted benchmarking by a neutral entity is needed.
55.- Marios Obwanga presents the work of the topic group on outbreak detection.
56.- The group conducted a literature review and global survey to understand current capabilities and gaps.
57.- An outbreak detection benchmarking platform was developed to evaluate AI models based on the working group guidance.
58.- Methods to generate shareable synthetic data and compare algorithms across aggregated data sets were established.
59.- Alexandre Chiavegatto Filho presents the work applying AI to predict neonatal mortality risk in developing countries.
60.- Using WHO's five minimum perinatal indicators, machine learning models were trained on data from eight countries.
61.- The models performed well, capturing about 90% of neonatal deaths within the 5% of pregnancies ranked as highest risk (see the risk-stratification sketch after this list).
62.- This enables targeted interventions to have maximum impact with limited resources. Expansion to other countries is planned.
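The benchmarking idea in item 10 can be illustrated with a minimal sketch: submitted solutions are trained on contributed data and scored only on a withheld split that submitters never see. The names used here (score_submission, X_hidden) and the synthetic dataset are illustrative assumptions, not the actual FG-AI4H platform API.

```python
# Minimal sketch of benchmarking submitted models against a withheld test set.
# All names (score_submission, X_hidden, ...) are illustrative assumptions.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Stand-in for contributed data; the real platform would use curated clinical datasets.
X, y = make_classification(n_samples=2000, n_features=20, weights=[0.9, 0.1],
                           random_state=0)
X_public, X_hidden, y_public, y_hidden = train_test_split(
    X, y, test_size=0.5, stratify=y, random_state=0)

def score_submission(model):
    """Fit on the public split, report AUC on the withheld split only."""
    model.fit(X_public, y_public)
    scores = model.predict_proba(X_hidden)[:, 1]
    return roc_auc_score(y_hidden, scores)

submissions = {
    "logistic_regression": LogisticRegression(max_iter=1000),
    "random_forest": RandomForestClassifier(n_estimators=200, random_state=0),
}
for name, model in submissions.items():
    print(f"{name}: hidden-set AUC = {score_submission(model):.3f}")
```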
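Item 25's decentralized approach can likewise be sketched: evaluation code travels to each data hub and only aggregate metrics come back, so patient-level records never leave their storage location. DataHub, federated_evaluation, and the toy hub names below are hypothetical, not the Open Code Initiative's actual interfaces.

```python
# Minimal sketch of decentralized evaluation: computation goes to each hub,
# only summary metrics come back. All class and hub names are hypothetical.
from dataclasses import dataclass
import numpy as np

@dataclass
class DataHub:
    name: str
    features: np.ndarray   # stays local to the hub
    labels: np.ndarray     # stays local to the hub

    def evaluate(self, predict_fn) -> dict:
        """Run a prediction function locally and return only summary metrics."""
        preds = predict_fn(self.features)
        accuracy = float((preds == self.labels).mean())
        return {"hub": self.name, "n": len(self.labels), "accuracy": accuracy}

def federated_evaluation(hubs, predict_fn):
    """Collect per-hub metrics and a sample-weighted global accuracy."""
    reports = [hub.evaluate(predict_fn) for hub in hubs]
    total = sum(r["n"] for r in reports)
    global_acc = sum(r["accuracy"] * r["n"] for r in reports) / total
    return reports, global_acc

# Toy hubs with random local data; a trivial threshold rule stands in for an AI solution.
rng = np.random.default_rng(0)
hubs = [DataHub(name, rng.normal(size=(500, 3)), rng.integers(0, 2, 500))
        for name in ("hub_a", "hub_b", "hub_c")]
reports, global_acc = federated_evaluation(hubs, lambda X: (X[:, 0] > 0).astype(int))
print(reports)
print(f"weighted global accuracy: {global_acc:.3f}")
```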
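The result quoted in item 61 is essentially a sensitivity-at-fixed-alert-rate measurement: rank pregnancies by predicted risk and ask what share of neonatal deaths falls in the top 5%. The sketch below only illustrates the metric on synthetic scores; it does not reproduce the study's data or models.

```python
# Sketch of the metric behind item 61: what share of adverse outcomes falls
# inside the top 5% of predicted risk? Synthetic scores stand in for a model
# trained on the five minimum perinatal indicators.
import numpy as np

def sensitivity_at_top_fraction(risk_scores, outcomes, fraction=0.05):
    """Fraction of positive outcomes captured in the top `fraction` of scores."""
    n_flagged = max(1, int(np.ceil(fraction * len(risk_scores))))
    flagged = np.argsort(risk_scores)[::-1][:n_flagged]
    return outcomes[flagged].sum() / outcomes.sum()

rng = np.random.default_rng(42)
outcomes = rng.binomial(1, 0.01, size=100_000)           # rare adverse outcome
risk_scores = rng.normal(size=100_000) + 3.0 * outcomes  # informative toy scores
capture = sensitivity_at_top_fraction(risk_scores, outcomes, fraction=0.05)
print(f"outcomes captured in top 5% risk: {capture:.0%}")
```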
Knowledge Vault built by David Vivancos 2024