
Seven Key Takeaways From a Week at AWS re:Invent 2023 

AWS re:Invent 2023 was a week full of exciting announcements, inspiring keynotes, and insightful sessions impacting all industries, including healthcare and life sciences (HCLS).

The event showcased the latest and greatest innovations in cloud computing, especially in the fields of large language models (LLMs) and generative artificial intelligence (AI). I had the opportunity to attend several sessions, workshops, and demos to hear from experts and peers in the cloud computing industry. Here are my top observations from re:Invent 2023 that will impact how we work today and tomorrow.

  1. LLMs, Image Generation Models and Support Are on the Rise

One of the exciting announcements at re:Invent 2023 was the release of new models in Amazon Bedrock. Bedrock allows customers to access and deploy a variety of LLMs, including Amazon’s own models like Titan as well as third-party models like Anthropic’s Claude 2.1, Meta’s Llama, and Cohere’s Command. Bedrock provides a unified interface and API for customers to easily integrate LLMs into their applications and workflows, and also offers tools and best practices for data preparation, model optimization and monitoring.

Bedrock supports both foundation models and custom models. Foundation models are pre-trained LLMs that can perform a range of natural language tasks, such as text generation, summarization, translation, question answering, and more. Custom models are LLMs that are fine-tuned or adapted to specific domains or use cases, such as legal, medical, or financial. Customers can choose from a catalog of foundation models, or build their own custom models using Amazon Bedrock’s data and model management features.
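To make the unified-API idea concrete, here is a minimal sketch of calling a Bedrock model through the `bedrock-runtime` client in boto3. The model ID, prompt format, and parameter values are illustrative assumptions, not a definitive integration:

```python
import json

def build_claude_request(user_prompt: str, max_tokens: int = 256) -> str:
    # Request body for an Anthropic Claude model on Bedrock; the prompt
    # template and parameter names here are assumptions for sketch purposes.
    body = {
        "prompt": f"\n\nHuman: {user_prompt}\n\nAssistant:",
        "max_tokens_to_sample": max_tokens,
        "temperature": 0.5,
    }
    return json.dumps(body)

def invoke_model(prompt: str) -> str:
    # Requires AWS credentials and the boto3 SDK; shown for shape only.
    import boto3
    client = boto3.client("bedrock-runtime")
    response = client.invoke_model(
        modelId="anthropic.claude-v2:1",  # assumed model identifier
        body=build_claude_request(prompt),
    )
    payload = json.loads(response["body"].read())
    return payload.get("completion", "")
```

Swapping in a different provider's model is then largely a matter of changing the model ID and request body, which is the appeal of a single interface.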

Amazon also showcased its own homegrown text generation models, Titan Text Lite and Titan Text Express, along with a new image generation model called Titan Image Generator. Titan Text Lite is a lightweight and fast text generation model that can produce high-quality text with minimal latency and resource consumption. Titan Text Express is a more powerful and expressive text generation model that can produce longer and more diverse text with more control and creativity. Titan Image Generator is an image generation model that can produce realistic images from natural language prompts, with invisible watermarks for security.

It’s clear new models and technologies are here to stay, so it is vital we understand how they could be used for clients across the life sciences industry and beyond to reach patients.

  2. Multi-Modal Vector Embeddings and Search Tools Shine!

Another exciting feature of Amazon Bedrock is the ability to use multi-modal search and recommendation options within LLMs by using Titan Multimodal Embeddings, which can translate text and other files into numerical representations called vectors. Vectors are useful for measuring the similarity and relevance of different types of data, such as text, images, audio, video and more. By using Titan Multimodal Embeddings, customers can leverage the power of LLMs to perform cross-modal tasks, such as finding images that match a text description, generating captions for videos, or recommending products based on user preferences.

Titan Multimodal Embeddings can also be used to create vector indexes, which are collections of vectors that can be searched and queried efficiently. Customers can use vector indexes to perform fast and accurate search and analytics on their data, and also to enable conversational interfaces and natural language understanding with LLMs.
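To make the vector idea concrete, here is a minimal plain-Python sketch (no AWS dependencies) of similarity search over embeddings: each item is a vector, and the item whose vector points most nearly in the same direction as the query wins. The toy vectors and index keys below are invented for illustration; in practice the vectors would come from an embeddings model:

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    # Similarity of two embedding vectors: 1.0 means identical direction.
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def nearest(query: list[float], index: dict[str, list[float]]) -> str:
    # Return the key of the most similar vector in a toy in-memory index.
    return max(index, key=lambda k: cosine_similarity(query, index[k]))

# Toy "multimodal" index: with a multimodal embeddings model, text and
# image vectors share the same space, so one query searches both.
index = {
    "photo_of_pills": [0.9, 0.1, 0.0],
    "clinical_note":  [0.1, 0.8, 0.3],
}
```

A real vector index adds approximate-nearest-neighbor structures so this lookup stays fast at millions of vectors, but the similarity computation is the same.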

Once again, there are many implications for our industry and clients.

  3. Retrieval-Augmented Generation Comes to Life

Another interesting feature of Amazon Bedrock is the ability for customers to search their own proprietary data stores with LLMs through retrieval-augmented generation (RAG), using Knowledge Bases for Amazon Bedrock, which fetches relevant text or documents automatically. RAG is a technique that combines LLMs with external knowledge sources, such as databases, documents, or web pages, to generate more informative and accurate text. For example, customers can use RAG to generate product descriptions, FAQs, summaries, or reviews by retrieving relevant information from their own data sources and incorporating it into the generated text.
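The RAG flow can be sketched in a few lines: retrieve the document most relevant to the question, then fold it into the prompt sent to the LLM. The word-overlap scoring and sample documents below are stand-ins for a real vector retriever and data store:

```python
def _words(text: str) -> set[str]:
    # Crude tokenizer: lowercase and strip trailing punctuation.
    return {w.strip(".,?!") for w in text.lower().split()}

def retrieve(question: str, documents: list[str]) -> str:
    # Pick the document sharing the most words with the question.
    # A real retriever would rank documents by embedding similarity.
    q = _words(question)
    return max(documents, key=lambda d: len(q & _words(d)))

def build_rag_prompt(question: str, documents: list[str]) -> str:
    # Augment the prompt with retrieved context before calling the LLM.
    context = retrieve(question, documents)
    return f"Use this context to answer.\nContext: {context}\nQuestion: {question}"

docs = [
    "Our return policy allows refunds within 30 days.",
    "The device ships with a two-year warranty.",
]
prompt = build_rag_prompt("How long is the warranty?", docs)
```

Because the retrieved text is injected at query time, the model can answer from data it was never trained on, which is the core appeal of RAG for proprietary data stores.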

Amazon Bedrock also offers Model Evaluation on Amazon Bedrock, which allows customers to evaluate, compare, and select the best foundation model for their use cases. Model Evaluation on Amazon Bedrock provides metrics and feedback on the performance, quality, and suitability of different LLMs, such as accuracy, fluency, diversity, coherence, and more. Customers can use Model Evaluation on Amazon Bedrock to find the optimal trade-off between speed, cost, and quality for their LLM applications.

  4. New Gen AI Innovation Center Brings Custom Models to Industry

For customers who want to build custom models with expert help, Amazon also announced the launch of its Gen AI Innovation Center, which provides data science and strategy expertise and supports building around Anthropic’s Claude models. The Gen AI Innovation Center is a team of experienced and talented data scientists, engineers, and consultants, who can help customers design, develop, and deploy custom LLMs and generative AI solutions. The Gen AI Innovation Center also provides access to Anthropic’s Claude models, which are state-of-the-art LLMs that can generate high-quality and diverse text with minimal data and compute requirements.

To further accelerate the model training process, Amazon also launched Amazon SageMaker HyperPod, which can train foundation models on thousands of AI accelerators. SageMaker HyperPod is a distributed and scalable training platform that can reduce model training time by up to 40 percent, and also lower the cost and complexity of training large-scale LLMs. Customers can use SageMaker HyperPod to train their own custom models, or fine-tune existing foundation models, using Amazon Bedrock’s data and model management features.

Key takeaway – as more brands want to explore custom options, AWS can now support it through Bedrock.

  5. Innovation in Integrating LLM and Generative AI Across Various Databases

Another major announcement at re:Invent 2023 was the integration of LLMs and generative AI with Amazon’s various databases, such as Aurora, Redshift, OpenSearch, DocumentDB, DynamoDB, and Neptune. Amazon broke down the silos between its different databases and allowed customers to easily leverage their data with LLMs and generative AI. Customers can now use LLMs and generative AI to perform tasks such as data ingestion, transformation, analysis, and visualization, across different types of data, such as relational, non-relational, graph and search.

Amazon also added vector search support to many of its databases, such as Aurora MySQL, OpenSearch, DocumentDB, DynamoDB, and MemoryDB for Redis, to enable faster and more relevant search and analytics. Customers can now use vectors to perform similarity and relevance search, as well as clustering and classification, on their data, and also to enable conversational interfaces and natural language understanding with LLMs and generative AI.
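As one concrete illustration, a k-NN search against an OpenSearch index with a vector field can be expressed with the body below, mirroring OpenSearch's k-NN query DSL. The field name and vector values are placeholder assumptions:

```python
def knn_query(vector: list[float], k: int = 5, field: str = "embedding") -> dict:
    # Build an OpenSearch k-NN query body: return the k documents whose
    # vectors are nearest to the query vector in the named field.
    return {
        "size": k,
        "query": {
            "knn": {
                field: {
                    "vector": vector,
                    "k": k,
                }
            }
        },
    }

query_body = knn_query([0.12, -0.4, 0.88], k=3)
```

In practice this body would be sent through the OpenSearch client's search API against an index whose mapping declares the field as a vector type.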

Another data announcement was the launch of AWS Clean Rooms ML, which allows customers to share their data with third parties in so-called clean rooms, and then run machine learning models on the data to get predictive insights. Clean rooms are secure and isolated environments that prevent data leakage and preserve privacy and compliance. Customers can use clean rooms to collaborate with other parties, such as partners, vendors, or regulators, and run machine learning models on their combined data, without exposing or transferring the raw data.

  6. AI-Powered Assistant Amazon Q Shines Bright

Finally, my last aha moment was with Amazon’s introduction of Amazon Q, an AI-powered assistant that can turn natural language prompts into customized recommendations for queries, data integration pipelines, and generative SQL in Amazon Redshift.

Amazon Q is a smart, conversational interface that can help customers access and analyze their data in Amazon Redshift without writing any code. Customers can simply ask Amazon Q questions or make requests in natural language, and Amazon Q will generate the best possible query, pipeline, or SQL code, then execute it and return the results, leveraging the power of LLMs and generative AI.


My list could easily run to 50 key observations or more, and there are many more innovations coming out of re:Invent. For our clients and prospects in the life sciences and pharma industry, now more than ever, it is important that you know how these technologies could help you.

Yes, they are complex, and the pace of innovation continues to accelerate every day, which is why a trusted partner that understands your challenges and brings innovative solutions and partners like AWS to the table can help.

The future is brighter than ever, and together we are reinventing what healthcare looks like tomorrow.

Author
Abid Rahman
Senior Vice President, Innovation

Abid Rahman spearheads innovation at EVERSANA, boasting a track record spanning over 23 years in the technology field and over 19 years specifically in the pharmaceutical industry. With extensive knowledge to bring technical solutions…