Insights for Organisations

Can we harness AI’s potential while ensuring control?

Academy Services Team
23.03.2024 Published: 07.03.24, Modified: 23.03.2024 20:03:36

Is AI a threat to humanity?

This is the third question that shows up in an internet search result for ‘Is AI…’

Much has been written and said about Artificial Intelligence (AI) and conversations vary from the pertinent to the outrageous, as seen in the first example.

Yet, despite all concerns, mass adoption of AI is growing at a staggering rate with the global AI market set to reach $407 billion by 2027.

As with any new technology, organisations and end-users should be aware of the potential risks associated with its adoption. Research by IBM shows that human error causes 95% of cyber security breaches. So, advocacy for responsible use of tech must be a universal commitment for consumers and enterprises alike.

AI is no different.

FDM Group Director of Information Security, Patrick Wake believes –

Chat bots and Artificial intelligence are not actually intelligent and should only be seen as a handy tool to utilise when needed. This ‘intelligence’, whilst impressive, is nothing more than a mathematical algorithm based on large datasets. 

Like all tools, there is the right tool for the right job. The datasets that the AI uses are created by crawling the internet. With this approach it can contain bias, and misinformation and any information uploaded typically ends up within the data set and the public domain. 

The misinformation provided by AI has already left its mark. One example of this can be noted where two US lawyers were fined for submitting six fake court citations generated by Chat GPT in an aviation injury claim. This led to the law firm receiving a $5000 penalty fine and reputational damage. 

The risk of data loss with AI has also been seen with organisations like Samsung, where members of staff inadvertently leaked meeting notes and source code trying to utilise the Chat GPT platform. 

Whilst this is a technology is in its infancy and it’s going to get things wrong, it is up to the people that utilise these platforms to be educated of risks and to verify the data that is being provided: in short learning how to use your tools safely, like children learning to not run with scissors. 

Of course, none of this should imply that AI does not pose a threat to our daily lives and livelihood.

‘Like all great craftsmen, we just need learn how to best use the tools we have’

ChatGPT launched in November 2022 and in the first five days racked up a cool one million users. There are currently 100 million ChatGPT users worldwide and as of June 2023 the website had recorded 1.6 billion visits.

What are the main cybersecurity challenges in the age of generative AI?

As AI’s growth trajectory continues to surge upwards and more and more businesses start using it in their day-to-day, it’s important to consider the best practices for safe AI use.

Patrick Wake shares his five top tips –

  1. When using an AI or Chat bot do your research. Ask the following questions and make an educated choice based on the answers:

Knowing its limitations will help you set your expectations and help you get the most out of these tools

2. Do not upload or provide chat bots with sensitive or private information. The earlier examples data leaks are probably the best case in point. The company has since banned employees from using the Chat GPT platform.

3. Verify information before using it, While AI and chat bots can provide useful information, always double check from trusted sources. Don’t rely on AI-generated content for important decisions!

4. AI and chat bots can be manipulated with prompt injection, always ensure that you have strict input validation and sanitisation in place.

5. If you come across any instance of AI being used for unethical or malicious purposes, report it to the platform or organization responsible.

Growing concerns have led to talks of regulation around safe AI use. Whilst it’s still early days before actual legislation is passed, leading tech providers like Microsoft are providing their own guidelines for the public governance of AI.

Microsoft’s recommendations include a 5-point blueprint

  1. Implement and build upon new government-led AI safety frameworks
  2. Require effective safety brakes for AI systems that control critical infrastructure
  3. Develop a broad legal and regulatory framework based on the technology architecture for AI.
  4. Promote transparency and ensure academic and non-profit access to AI
  5. Pursue new public-private partnerships to use AI as an effective tool to address the inevitable societal challenges that come with new technology

Next Steps

As the scope and use of AI continues to grow and infiltrate different areas of personal and public life, and we as a society adapt and learn how best to respond to this dynamic phenomenon, governance is going to become top priority. Education about the correct and safe use of AI is the first step as legal and regulatory frameworks are formalised.

Insights

Insights for Organisations

Is your business ready for AI?

FDM Consultant Jonathan van Kuijk works in the Workplace Technology department for a retail client.

Find out more
Insights for Organisations

From data to action: strategies for tackling financial crime in the UK

The UK loses a staggering £8.3 billion each year to financial crime, the government's Economic Crime Survey (ECS) has revealed.

Alumni

FDM Alumni's fast track journey to TechSkills accreditation

Alice Watkins is an FDM Alumni working as a Business Analyst for a global banking client.