The power of Responsible AI, and its potential as a force for positive innovation that improves everyday lives in Scotland, was a recurring theme at CGI’s flagship annual event at St Andrews.

The primary IT provider to Scottish local authorities and private businesses once more played host at the Old Course Hotel to more than 100 senior executives and officials for a day of discussion, presentations and the exchange of ideas around the theme of innovation and entrepreneurship.

A host of key speakers showcased the many ways that businesses and organisations can ‘think outside the box’ in order to deliver real change for citizens and the environment in which they live and work.

Much of this thinking revolved around AI, and especially around debunking the myths surrounding it.

To do that, CGI turned to its Global AI Research Lead, Dr Diane Gutiw, to enlighten the audience. Diane posed three questions: should we be afraid of AI, who is using it and achieving real value, and what is coming next?

Should we be afraid of AI?

AI is really just a tool, the parameters of which are set by humans. It’s not magic – it’s just maths. It has been around for decades and is only now getting attention as it has become more accessible to everyone.

Nowadays most organisations predict what is likely to happen in the future by analysing data from the past, and consider how that data might be used to positively influence our behaviour.

AI automates that process, with the ability to simulate and replicate human reasoning in order to absorb, digest and produce information much more quickly than a human can. The more information we have at our fingertips, thanks to AI, the better insights we are able to gather – and in turn we can make better decisions.

But what if it is used in the wrong way? The big vendors currently leading the charge on AI – such as Microsoft Copilot, Google, AWS, the Turing Institute and also CGI – are taking scientific and academic approaches used in research for decades and applying them to business AI. These include ethics, responsible use, privacy, security, reliability, transparency, statistical relevance and accountability, which are built into responsible use frameworks.

This ultimately leads to technology which has a clear purpose, and which is developed not only to solve problems but also to create benefits and opportunities. So for every piece of deepfake AI, you can develop a framework that creates AI to debunk it.

This not only transforms opinions on AI but allows it to be used in far more beneficial ways. For example, university professors can now use AI tools to look at what percentage of a student paper was written by generative AI. They are not telling their students ‘don’t use AI at all’. They are saying: embrace it as a tool to search for information, but be discerning – as I am checking you.

That is how AI can be implemented safely and can make a positive difference in all of our lives. Its spread is now inevitable. We will all have access to it, especially our children. But we can learn to use it in a way that benefits us by being discerning. If we do, we can take the power of these tools and use them to provide maximum benefit to everyone.

So who is getting the value?

AI is not new. We have been working with machine learning since the 1950s. But now our machines have the power and speed to look at all different types of data – documents, narrative text, Excel spreadsheets, images and videos – and use these tools to analyse them, all the while mimicking our reasoning as they do so.

Coupled with this is the power of our computers, both quantum and in the cloud. They now have the ability to synthesise information much faster. Gone are the days when we had to leave a sign for the janitor saying ‘do not turn off this computer’ because it took days to process the information a piece of research needed.

Now organisations, both private and public sector, are able to gain real value and insights that inform policy based on AI analysing real-time data. The gains are remarkable, the achievements amazing.

For example, Western Canada’s Covid data response saw AI deliver amazing things. These included forecasting the vaccine rollout every night, ready for the next morning, so that no appointments needed to be cancelled; covering higher-risk areas quickly; minimising vaccine waste; and calculating the rollout strategy to open up businesses and society safely, as quickly as possible.

It also forecast changes in variants, contact tracing, the amount of vaccine needed, the types of vaccine, the clinics available and clinic capacity. For the first time ever, policy decisions during a public health crisis could be made on purely evidence-based grounds to ensure community wellness.

Through collaborative working among authorities and changes to previous data-sharing restrictions, the achievements were ground-breaking.

Another example was the opioid crisis in Canada. AI allowed researchers to look at patterns they had never seen before – answering questions such as at what point in the addiction cycle health professionals can really make a difference: the first hospital or GP visit, outpatient appointments, or a second stay in rehab. AI helped make discoveries that had never been possible before.

AI also provides a brilliant tool for spotting things that cannot be seen with the naked eye. CGI and Helsinki University Hospital developed an AI solution for reviewing brain CT scans, supporting the radiologist by detecting early brain bleeds that are hard to spot with the human eye. The radiologist completes their initial read and is then provided with the AI-driven analysis – supporting experts and helping to save lives.

That is the power of AI – but it is still just a tool. Organisations get the value by integrating it into their own workflows. AI is not solely a standalone tech issue: the real benefits and return on investment come from how you integrate it into your organisation. It is no longer about data scientists in the basement; it will be integrated into the workforce.

It also provides an opportunity to prevent the loss of much-needed knowledge that exists among our soon-to-retire workforce. Figures suggest that as much as 40% of the population will retire in the next 10 years. The challenge is to find a way to extract that knowledge and store it safely, so it can provide real insights for the next generation.

When we come to need that data, we can extract it from our data stores and documents and look to build the services which help us live very comfortable lives, especially as we are also a generation that will need very specialised services as we age.

Where is it going next?

What we will see is more multimodal AI that will include video and generate graphics, and the accuracy level will get even higher. It will then be used across a number of areas so it can solve really big problems to better our lives. Its evolution will be rapid and incredible.

There will be models such as digital triplets, which leverage generative AI to extend existing investments in data by further interrogating that data and its findings – providing explanations and alternative scenarios in natural language, including alternative treatment protocols from diagnostics, the next best action for equipment failures, and greater insights and multiple recommendations from existing dashboards.

And while we tend to look at the future shape of AI as being all about robotics – the likes of Google’s glasses and Apple’s goggles – in reality it will be tech that becomes embedded in our daily lives, and improves them.

For example, it is AI taking a 50-page white paper and, in minutes, generating a solid PowerPoint which pulls in the right data and images, ready for you to tweak. This is how we will work and interact with AI every day.

We will also have discretion over when we use it. Some people might like to continue writing their documents and emails themselves, taking full accountability for their content. Others, who may not speak English as a first language, for instance, may rely on it heavily for emails.

But it’s important to remember this analogy. In the not-too-distant future you may drive down the road and see a dog abandoned at the roadside. It’s a robot dog. It is dirty and uncared for. So you take it home, clean it, and love that dog as if it is your best friend. But the dog doesn’t love you, because it’s still just a robot made up of electronics and data.

We must not humanise tools like AI. They are just tools. Artificial general intelligence and sentient robotics do not yet exist, and are not likely to for a long time.

The future will see AI blend into everyday life just like the internet. We will almost forget that it is there. By creating responsible use frameworks and a foundation of guardrails, and by advancing and using AI in a responsible, ethical and reliable way, we are already seeing it solve really big problems.

That is the future we are likely to see.