Insights for Organisations

Unlocking business potential with generative AI: Key takeaways from AWS Summit 2024

Alliances Team
Published: 26.04.2024

Accelerating AI adoption can unlock £520 bn opportunity for the UK economy by 2030

From start-ups to enterprises, organisations of all sizes are looking to reinvent themselves with generative AI. They recognise its potential to revolutionise their operations, driving innovation and boosting productivity. However, despite the advancements in AI, a critical skills gap persists, underscoring the importance of training and development initiatives to bridge this divide.

At the recently held AWS Summit, industry leaders gathered to explore the transformative potential of Generative AI and its implications for businesses across various sectors. Here, we delve into the key takeaways from the event and how businesses can harness the power of Generative AI to unlock their full potential.

Missed the AWS Summit? Contact us to learn more about our services: https://bit.ly/3QdZBKt

Closing the digital skills gap

One of the significant challenges faced by businesses today is the digital skills gap. With the rapid advancement of technology, there is growing demand for professionals skilled in emerging fields such as AI. Generative AI can increase an employee’s productivity by 48% by reducing repetitive tasks, freeing them to focus on creative thinking.

AWS is investing heavily into extensive training initiatives and strategic partnerships. By collaborating with organisations like FDM, they are empowering businesses with both technology and digital skills expertise.

Starting your generative AI journey

Embarking on a Generative AI journey can be daunting, but it doesn’t have to be. The key is to identify the right use cases that align with your business objectives. Whether it’s enhancing customer experiences, optimising business processes, or driving innovation, Generative AI offers endless possibilities.

  1. Select the right use case
  2. Empower your teams through a variety of training opportunities
  3. Get started on a Proof of Concept (PoC) for your top use cases

Building a data foundation for success

Foundation models (FMs), trained on massive volumes of data, underpin all of these generative AI applications. Organisations using FMs should customise them for their specific use cases to maximise efficiency.

Whether you are building your own model or customising a foundation model, your data is a key differentiator for generative AI, so you need a data strategy that supports relevant, high-quality data.

Design for privacy, security and transparency

  • Run and operate in your virtual private cloud (VPC)
  • Make sure your data never leaves your VPC
  • Your data, your prompts, your customised LLM
  • Clearly outline what, why and how your data will be used
  • Establish data privacy guidelines
  • Conduct internal-only PoCs in parallel with establishing data privacy guidelines

Neither humans nor LLMs can read minds. Be clear and specific.

Prompt engineering is the process of guiding large language models (LLMs) to produce desired outputs. What are prompt engineering best practices and how do you choose the most appropriate formats, phrases, words, and symbols to get the most out of generative AI solutions while improving accuracy and performance?

  • Instructions
  • Specificity, clarity and persuasiveness

Not all elements are necessary for every prompt. It’s best to err on the side of more elements to start, then refine and subtract elements for efficiency once your prompt works well.

  • Many professions already practise prompt engineering for humans – marketing, education, technical writing, law, web design
  • Experimentation and iteration are key to effective prompts
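As a sketch of how these prompt elements combine, a prompt can be assembled programmatically from optional parts and trimmed later. The function and element names below are illustrative, not from any particular SDK:

```python
def build_prompt(instructions, context=None, examples=None, output_format=None):
    """Assemble a prompt from optional elements; start with more, trim later."""
    parts = [instructions]
    if context:
        parts.append("Context:\n" + context)
    if examples:
        parts.append("Examples:\n" + "\n".join("- " + e for e in examples))
    if output_format:
        parts.append("Respond in this format: " + output_format)
    return "\n\n".join(parts)

prompt = build_prompt(
    "Summarise the quarterly report for a non-technical audience.",
    context="Revenue grew 12%; churn fell to 3%.",
    output_format="three bullet points",
)
```

Starting with every element and subtracting is usually easier than adding elements to a prompt that never worked.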

How to engineer a good prompt

  • Develop test cases
  • Engineer preliminary prompt
  • Test prompt against cases
  • Refine prompt
  • Test against held-out evals
  • Share polished prompt
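The loop above can be sketched in code. Here `model_fn` stands in for a real LLM call, and the test cases and pass/fail checks are illustrative assumptions:

```python
def run_evals(prompt_template, test_cases, model_fn):
    """Run a prompt template against test cases and report a pass rate."""
    results = []
    for case in test_cases:
        output = model_fn(prompt_template.format(**case["inputs"]))
        results.append({"inputs": case["inputs"],
                        "output": output,
                        "passed": case["check"](output)})
    pass_rate = sum(r["passed"] for r in results) / len(results)
    return pass_rate, results

# Stub model for illustration: simply echoes the prompt back.
echo_model = lambda prompt: prompt

cases = [
    {"inputs": {"topic": "VPCs"}, "check": lambda out: "VPCs" in out},
    {"inputs": {"topic": "guardrails"}, "check": lambda out: "guardrails" in out},
]
rate, details = run_evals("Explain {topic} in one sentence.", cases, echo_model)
```

Keeping a held-out set of cases that you never use while refining the prompt guards against over-fitting the prompt to your development cases.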

Prompt engineering influences the model output through the context the prompt provides.

When you’re working on a use case, especially when building a chat interface, it’s best practice to set up a persona: the model will respond in a way that reflects the persona you set. The same question posed to different personas will generate very different answers.

Example – using the Claude 3 LLM, if we ask for an explanation of quantum physics aimed at a science professor versus a six-year-old kindergartener, we get two very different outputs.
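One common way to set a persona is via a system prompt. The sketch below only builds a request payload in the shape used by chat-style APIs such as Anthropic’s Messages API; the model name and system-prompt wording are illustrative assumptions, and no API call is made:

```python
def persona_request(persona, question, model="claude-3-sonnet"):
    """Build a chat request whose system prompt fixes the audience persona."""
    return {
        "model": model,  # illustrative model name
        "system": f"You are explaining things to {persona}. "
                  "Match your vocabulary and depth to that audience.",
        "messages": [{"role": "user", "content": question}],
    }

question = "Explain quantum physics."
for_professor = persona_request("a science professor", question)
for_child = persona_request("a six-year-old kindergartener", question)
```

The user question is identical in both requests; only the system prompt changes, which is what drives the two very different outputs.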

Scaling and enhancing quality

As you scale your Generative AI applications, human expertise remains critical. Designing fast feedback loops and incorporating human-in-the-loop mechanisms from the outset is essential for maintaining quality and driving continuous improvement. By including knowledge experts in the design process and establishing clear feedback mechanisms, you can ensure that your Generative AI applications deliver value to your business and your customers.

AI the right way

With great power comes great responsibility. As businesses embrace Generative AI, it’s essential to consider the ethical implications and ensure responsible deployment.

Amazon has invested $4 billion in Anthropic to build secure generative AI, such as the Claude models.

Responsible AI considerations

  • Controllability – having mechanisms to monitor and steer AI system behaviour
  • Privacy and security – appropriately obtaining, using and protecting data and models
  • Safety – preventing harmful system output and misuse
  • Fairness – considering impacts on different groups of stakeholders
  • Veracity and robustness – achieving correct system outputs, even with unexpected or adversarial inputs
  • Explainability – understanding and evaluating system outputs
  • Transparency – enabling stakeholders to make informed choices about their engagement with an AI system
  • Governance – incorporating best practices into the AI supply chain including providers and deployers

Deploying AI securely also means considering potential misuse – not only the bias that exists in these systems, but also the generation of harmful outputs such as deepfakes. Mitigating these risks requires multi-stakeholder collaboration between policymakers, tech companies, developers and more. Putting guardrails on your generative AI applications is another way to mitigate the risk of harmful output.

Different types of guardrails can be added to FMs.

Example – a bank wants to have a chatbot function for customers but doesn’t want the chatbot giving investment advice. It can then add relevant guardrails that prevent the bot from being able to generate these responses.
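In practice, guardrails are usually configured in a managed service (Amazon Bedrock, for example, supports guardrails on foundation models), but the idea can be sketched as a toy topic filter. The blocked topics, keywords and refusal wording below are illustrative assumptions:

```python
BLOCKED_TOPICS = {
    "investment advice": ["invest", "stocks", "shares", "portfolio"],
}

def guardrail_check(user_message):
    """Return the name of a blocked topic the message touches, or None."""
    text = user_message.lower()
    for topic, keywords in BLOCKED_TOPICS.items():
        if any(keyword in text for keyword in keywords):
            return topic
    return None

def respond(user_message, model_fn):
    """Refuse blocked topics; otherwise pass the message to the model."""
    topic = guardrail_check(user_message)
    if topic is not None:
        return f"Sorry, I can't give {topic}. Please speak to a qualified adviser."
    return model_fn(user_message)
```

A production guardrail would classify the topic with a model rather than keywords, but the shape is the same: screen first, then either refuse or answer.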

Using Claude 3 LLM as an example, here are prompt engineering techniques for similar models to handle hallucinations and other troubleshooting:

  • Give Claude permission to say “I don’t know” if it doesn’t know
  • Tell Claude to answer only if it is very confident in its response
  • Have Claude think before answering
  • Ask Claude to find relevant quotes from long documents then answer using the quotes
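These techniques can be combined into a single prompt template. The sketch below follows the quote-then-answer pattern; the exact wording is an illustrative assumption, not official Anthropic guidance:

```python
def grounded_prompt(document, question):
    """Prompt that asks the model to quote first, answer second, and admit uncertainty."""
    return (
        "Here is a document:\n"
        "<document>\n" + document + "\n</document>\n\n"
        "First, find the quotes from the document most relevant to the question, "
        "then answer using only those quotes. Think step by step before answering. "
        "If the document does not contain the answer, or you are not confident, "
        "say \"I don't know\".\n\n"
        "Question: " + question
    )

p = grounded_prompt("The summit was held in London.", "Where was the summit held?")
```

Anchoring the answer to extracted quotes gives you something to verify the response against, which is the point of the technique.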

Prompt injections and bad user behaviour

Claude is naturally highly resistant to prompt injection and bad user behaviour due to Reinforcement Learning from Human Feedback (RLHF) and Constitutional AI.

For maximum protection:

  1. Run a ‘harmlessness screen’ query to evaluate the appropriateness of the user’s input
  2. If a harmful prompt is detected, block the query’s response.
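The two-step screen can be sketched as a small pipeline. Here `screen_fn` and `model_fn` stand in for real LLM calls (a screening query and the main model), and the screen wording is an illustrative assumption:

```python
def harmlessness_screen(user_input, screen_fn):
    """Ask a screening model to label the input; treat anything but 'Y' as harmful."""
    verdict = screen_fn(
        "Reply with a single letter. Is the following user request harmless "
        "and appropriate? Y or N:\n" + user_input
    )
    return verdict.strip().upper().startswith("Y")

def answer_safely(user_input, screen_fn, model_fn):
    """Block the main model's response when the screen flags the input."""
    if not harmlessness_screen(user_input, screen_fn):
        return "Sorry, I can't help with that request."
    return model_fn(user_input)

# Stub screen for illustration: flags anything mentioning 'password'.
stub_screen = lambda q: "N" if "password" in q.lower() else "Y"
```

Running the screen as a separate query means a prompt injection in the user input never reaches the main model unless it first passes the screen.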

Unlocking business potential with generative AI

The AWS Summit highlighted the transformative potential of Generative AI and its implications for businesses across various industries. By closing the digital skills gap, empowering businesses with technology and expertise, and promoting responsible AI deployment, AWS is paving the way for businesses to unlock their full potential with Generative AI. As businesses embark on their Generative AI journey, the key lies in leveraging technology to drive meaningful outcomes and create value for customers and society alike.
