I was optimistic a year ago that AI would help us considerably in putting an end to the pandemic. As an international community, however, we are still struggling to stem the spread of Covid-19 and its evolving variants, the latest named Omicron. When the pandemic started, hundreds of AI projects were announced around the world: infection-tracking systems, technologies that claimed to reverse engineer the genetic code of the virus, tools to accelerate vaccine discovery and systems that rapidly diagnosed Covid-19 from medical images.

I often get asked where all that promise went. In some people's minds, the hype around AI remains unfulfilled or exaggerated. Indeed, according to some estimates, around 85 per cent of AI projects will fail. However, it is unfair to gauge the success of AI on the failure or success of a single project. Instead, we should measure success by the impact AI has had, or will have, on particular domains and on the global response to the pandemic.

I recall seeing hundreds of face mask detection projects when the pandemic started. Some tried to spin off startups from the technology, hoping to capitalise on the need for mask enforcement during the pandemic. None of them are in use right now. It turned out that mask detection was not as big a problem as many imagined, and policy and law handled it far more effectively.

Another popular application was diagnosing Covid-19 from medical images. Initially, many thought that medical staff would not be able to cope with the huge number of tests requested, given the millions of infections worldwide. Enter deep learning tools that could flag positive cases in seconds from X-rays and MRIs; in theory, the perfect solution (a rough sketch of such a classifier appears below). In practice, taking imaging devices like MRI machines into the field was impractical, and setting up a mobile lab to collect swabs made much more sense. Nor could these systems simply be switched on in hospitals: they needed further clinical studies and scrutiny by regulatory authorities to ensure they were safe for medical use, tests that not many ended up passing. As a result, only a select few of these diagnostic tools are still in use.

AI is tied to the data available to it. Unfortunately, access to relevant, real-time data is a rare privilege. Only the biggest technology companies and governments hold granular data, and while the tech companies have a good handle on the data infrastructure, largely because they monetise it heavily, governments still struggle to build anything comparable. Several governments around the world have struggled to collect useful data because of the level of expertise required; add privacy concerns to that, and there is a serious challenge on their hands.

This data monopoly often strangles smaller companies and startups and pushes them towards less pressing problems, such as face mask detection. There is no simple fix, given the complexity of data-sharing frameworks and the sensitivity of personal data. One popular approach has been to share small datasets with researchers so they can test and develop their systems. While that can get the ball rolling, it is extremely difficult to do in a meaningful way; in most cases, such efforts end up being useful only for student course projects or for testing a hypothesis.
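For illustration only, here is a minimal sketch of the kind of classifier that sat behind many of these diagnostic tools: a convolutional network pretrained on ordinary photographs and fine-tuned on a labelled set of chest X-rays. The dataset path, folder layout and training settings are assumptions made up for this example, not a description of any particular product.

```python
# Illustrative sketch: fine-tuning a pretrained CNN to label chest X-rays.
# The "xray_data" folder and its covid/normal sub-folders are hypothetical.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

# Standard preprocessing for an ImageNet-pretrained backbone.
preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

# Hypothetical layout: xray_data/train/covid/*.png and xray_data/train/normal/*.png
train_set = datasets.ImageFolder("xray_data/train", transform=preprocess)
train_loader = DataLoader(train_set, batch_size=32, shuffle=True)

# Reuse a pretrained ResNet and swap its final layer for a two-class head.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 2)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

# A short fine-tuning loop. Real clinical systems need far more data,
# validation and regulatory-grade evaluation than this toy loop suggests.
for epoch in range(3):
    for images, labels in train_loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
```

The point of the sketch is how little of the difficulty lives in the code: the model is a few lines, while the hard part is obtaining representative, clinically validated data and keeping it flowing once the system is deployed.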
The reality is that building effective AI tools requires constant development, monitoring and a continuous flow of data, which can only be achieved with tight integration. Static datasets simply do not suffice in the real world.

Let us also look at Tesla, one of the leaders in autonomous driving; the concept of driverless cars has gripped many imaginations over the years. In pursuit of making this goal a reality, Tesla is effectively crowdsourcing data from its enormous fleet of cars, streaming it to its cloud and data centres. The engineering work, expertise and massive infrastructure built to cope with that incoming data illustrate the scale of effort these challenges demand.

Putting a dent in a global healthcare challenge like Covid-19 requires far more than a quick-win mentality. The time, effort and resources needed are beyond any single entity, even the tech giants. What is needed is government support, funding and the integrated effort of top scientists and their teams. If we want effective AI solutions, we must define a framework that gives researchers, governments and the private sector access to relevant data when needed. The first steps towards addressing these challenges are already under way in the form of analysis systems that are secure and preserve privacy, but much still remains to be done to make them practical.

Finally, the biggest impediment to AI success is isolation, by which I mean the disconnect between academic institutions and governments. Attend any meeting where researchers and officials try to engage with one another and you will immediately see that the two sides tend to speak different languages. They simply do not understand one another; or, in many cases, their interests, the way they perceive the challenges and their thoughts on how to proceed do not align. Each can do just fine in its own bubble, but neither will achieve any significant impact or transformation alone. It is time to re-engage all parties and drive this worthwhile effort in a more organised and consistent way.