Leveraging GenAI Prompts
Streamline regulatory compliance, accelerate drug development processes, and enhance clinical data analysis with strategic AI prompting. Learn how you can utilize generative AI to improve decision-making, optimize resource allocation, and drive innovation in therapeutic development.
What Are Prompts?
GenAI prompts are natural language instructions given to AI models that guide them in producing specific outputs.
The clarity and specificity of your prompt directly influence the quality of the AI's response and its relevance to your objectives.
Input Prompt
Pharmaceutical manager provides specific instructions for regulatory documentation or data analysis
Processing
AI interprets the regulatory context and generates compliant, industry-specific content
Output
AI delivers pharmaceutical-relevant insights or documentation drafts that meet compliance standards
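To make this input-processing-output flow concrete, here is a minimal sketch in Python. The openai package, the model name, and the example prompt are assumptions you can swap for whatever tooling your organization has approved.

```python
# Minimal sketch: sending a prompt to a GenAI model and reading the output.
# Assumes the `openai` package (v1+) is installed and an API key is configured;
# the model name and prompt text are illustrative placeholders.
from openai import OpenAI

client = OpenAI()

prompt = (
    "Summarize the three most important regulatory considerations "
    "for a Phase III cardiovascular trial in the EU, as bullet points."
)

response = client.chat.completions.create(
    model="gpt-4o",  # assumption: any chat-capable model would work here
    messages=[{"role": "user", "content": prompt}],
)

print(response.choices[0].message.content)  # the model's generated output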
Syntax tips
Use colons and quotation marks to separate your instructions from the input text.
Example: Test this in your AI: "What is this email about?: """
Dear Bob
I hope this message finds you in excellent health and high spirits. I am writing to formally submit a preliminary request for a potential short-term modification to my forthcoming professional availability, specifically concerning the standard work obligations allocated to the upcoming June 30.
After careful consideration of both team deliverables and ongoing initiatives, I have identified this particular day as a minimally disruptive window during which I might, contingent upon your approval, be excused from my usual duties. The rationale for this request is grounded in a combination of logistical, personal, and wellness-related considerations which, while not urgent, are best addressed in a timely and proactive manner.
Should you require any additional documentation, clarification, or the rearrangement of responsibilities to ensure continued team momentum in my brief absence, I am, of course, more than willing to collaborate on a transition plan. I greatly appreciate your time and understanding in reviewing this matter.""""
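If you build prompts from a script rather than pasting into a chat window, the same delimiter idea carries over directly. The sketch below simply wraps the email in triple quotes so the instruction and the input text stay clearly separated; the variable names are illustrative.

```python
# Sketch: separating the instruction from the input text with delimiters.
email_text = """Dear Bob,
I hope this message finds you in excellent health and high spirits. ...
"""  # paste the full email here

# The colon and the triple quotes mark where the instruction ends
# and the pasted input begins.
prompt = f'What is this email about?: """{email_text}"""'
print(prompt)
```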
Tips for Writing Better AI Prompts
1. Be Specific and Direct
Eliminate ambiguity by clearly stating what you want. Compare "Tell me about clinical trials" with "Explain three key regulatory considerations for Phase III clinical trials of cardiovascular medications in European markets."
2. Set Clear Parameters
Define boundaries like word count, style, or format: "Write a 100-word summary of our new diabetes medication's efficacy data for a quarterly executive report."
3. Match the AI's Capabilities
Tailor prompts to what the model can actually do—use text models for analyzing research papers, data visualization tools for clinical trial results, and specialized models for molecular structure predictions.
The difference between an average prompt and an excellent one can be the difference between generic market insights and actionable intelligence that drives pharmaceutical innovation.
Prompt Framework Example
Role - Task - Format
Mini-Case for GenAI Workshop (RTF Exercise) - 15 min.
Case: Your department is about to integrate a new GenAI-powered chatbot into its daily operations. Leadership wants to understand how it will impact workflows, employee roles, and compliance. Use the RTF framework (Role – Task – Format) to create a prompt that supports informed decision-making.
Task:
  1. Choose a relevant [ROLE]
  2. Define the [TASK]
  3. Select an appropriate [FORMAT]
Write your own prompt using the RTF structure – then test it in your AI model of choice.
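As a starting point, here is a minimal sketch of how the RTF structure can be templated in Python. The role, task, and format values are only placeholders for the chatbot-integration case above, not a prescribed answer.

```python
# Sketch: a simple Role-Task-Format (RTF) prompt template.
def rtf_prompt(role: str, task: str, fmt: str) -> str:
    return (
        f"Role: You are {role}.\n"
        f"Task: {task}\n"
        f"Format: Present the answer as {fmt}."
    )

# Placeholder values for the chatbot-integration case described above.
print(rtf_prompt(
    role="an experienced change manager in a pharmaceutical department",
    task=("Assess how a new GenAI-powered chatbot will affect workflows, "
          "employee roles, and compliance, and outline the key decisions "
          "leadership needs to make."),
    fmt="a one-page briefing with three sections: Workflows, Roles, Compliance",
))
```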
Context engineering
Context is king! The AI model is powerful but unaware of your situation. Give it context, and you will usually get much better answers.
Exercise: From Generic to Strategic (for Pharma Managers)
  1. Ask the AI: "Write a short statement that we just landed a new client."
  2. Observe the generic output.
  3. Now add context: What department are you in? What type of client was landed (research partner, CRO, supplier)? From which market?
  4. Try again with context like: "We're in External Innovation at a global Danish pharma company, and just secured a strategic collaboration with a US-based AI drug discovery startup."
Source: Watch the full clip.
This video provides a comprehensive overview of Context Engineering, a powerful AI practice that enhances AI performance by providing it with relevant, extensive data. It's presented as an evolution of prompt engineering, focusing on the what (the data) rather than just the how (the instructions).
Here's a breakdown of the key concepts with examples from the video:
Core Idea:
Instead of just giving the AI a command (a "prompt"), you equip it with a rich set of information ("context") to draw from. This allows for much more nuanced and accurate responses. The video uses the analogy of a personal assistant: prompt engineering is like asking a stranger for help, while context engineering is like having a long-time assistant who knows your work and has all your files [03:02].
How it Works:
A user's prompt triggers a process where the system gathers data from various sources before sending it to the AI. This can include:
  • Corporate data
  • Past conversations
  • External data sources
  • Domain-specific information
  • Real-time information from tools
This aggregated data, along with the original prompt, is then fed to the AI to generate a response [04:54, 07:16].
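As a minimal sketch of that aggregation step: the openai package and model name are assumptions, and the source texts below are placeholders for whatever document stores, calendars, or search tools you would actually pull from.

```python
# Sketch: assembling context from several sources before calling the model.
# The source contents are placeholder strings; in practice they would be
# retrieved from corporate systems, past conversations, or external tools.
from openai import OpenAI

client = OpenAI()

sources = {
    "Corporate data": "Q3 pipeline overview ...",                    # placeholder
    "Past conversations": "Last week we agreed to ...",              # placeholder
    "Domain-specific information": "Interim reports follow the internal review process ...",  # placeholder
}

context = "\n\n".join(f"## {name}\n{text}" for name, text in sources.items())
user_prompt = "Draft a short status note on the interim report delay."

response = client.chat.completions.create(
    model="gpt-4o",  # assumption: any chat-capable model
    messages=[
        {"role": "system", "content": "Answer using only the context below.\n\n" + context},
        {"role": "user", "content": user_prompt},
    ],
)
print(response.choices[0].message.content)
```

The point of the sketch is that the user's one-line prompt ends up being a small fraction of what the model actually sees.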
Examples in Action:
  • Code Generation: Tools like Cursor and Claude Code don't just take a coding instruction. They also scan your existing codebase, search for relevant documentation online, and even check your terminal for error messages to write more effective and context-aware code [14:57].
  • Research: When you ask an AI like ChatGPT or Claude to research a topic, it doesn't just rely on your question. It can review hundreds of sources, with your initial prompt being a very small part of the total information it considers. This involves both the documents you provide and information the AI dynamically retrieves [15:52, 16:30].
  • Email Automation: A basic AI might write a generic email. A context-engineered AI, however, can access your calendar to check for availability, review past emails to match your tone, and consult meeting notes. This allows it to draft a personalized email and even suggest and take actions, like sending a meeting invitation [17:43, 18:18].
When to Use It:
The video suggests that for everyday AI users, prompt engineering is usually sufficient. Context engineering becomes essential for more complex, agent-like applications where the AI needs to handle tasks with unknown information, such as:
  • When the AI is "ignorant" of necessary data, not just poorly instructed [11:04].
  • In applications that require pulling in dynamic data, like from your calendar or email, to take autonomous actions [12:28].
Getting Started:
The video advises starting small. If you find your AI is limited by a lack of context, begin by adding one new data source, measure the improvement, and then incrementally add more [22:20, 22:50].
The future of this field may involve "autocontext engineering," where the AI itself manages the context, and "context-aware agents" that can adapt to their environment and user needs in real-time [20:39, 20:44].
Getting better answers
Effective use of AI in pharmaceutical settings requires strategies that elicit precise, actionable information. One powerful approach is encouraging dialogue:
"Ask me clarifying questions until you're 95% confident you can complete the task successfully."
This prompt structure creates a collaborative process where the AI seeks additional context before providing solutions, significantly improving accuracy for complex pharmaceutical questions. By inviting the AI to request specific details about study parameters, regulatory frameworks, or patient populations, you'll receive more tailored and reliable information.
Remember that AI models respond best when they understand the full context of your inquiry. Incorporating this interactive approach into your prompting strategy can transform generic responses into precisely targeted insights.
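One way to operationalize this is a simple dialogue loop: keep answering the model's questions until it signals it is confident enough to proceed. Below is a minimal sketch, assuming the openai package; the "READY" marker is an assumed convention for this sketch, not a built-in model feature.

```python
# Sketch: a clarifying-questions loop. The model keeps asking questions until it
# is at least 95% confident, then answers. "READY" is an assumed signal word.
from openai import OpenAI

client = OpenAI()

messages = [
    {"role": "system", "content": (
        "Ask me clarifying questions until you're 95% confident you can "
        "complete the task successfully. When you are, reply starting with "
        "the single word READY, followed by your answer."
    )},
    {"role": "user", "content": "Help me design an interim analysis plan for our Phase II study."},
]

while True:
    reply = client.chat.completions.create(model="gpt-4o", messages=messages)  # model is an assumption
    content = reply.choices[0].message.content
    print(content)
    if content.strip().startswith("READY"):
        break
    messages.append({"role": "assistant", "content": content})
    messages.append({"role": "user", "content": input("Your answer: ")})
```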
Ask for Expert-Level Advice
When seeking high-quality guidance from AI, frame your question with:
"What would a top 0.1% person in this field think about this problem?"
This prompt technique elevates responses by encouraging the AI to simulate expertise from the most accomplished professionals in any domain. It's particularly effective for complex problems in specialized fields like medicine, engineering, or finance.
For example, instead of asking "How do I improve pharmaceutical research?" try "What would a top 0.1% pharmaceutical researcher think about optimizing our clinical trial design?"
By anchoring the AI to elite expertise, you receive more nuanced, sophisticated, and actionable insights that reflect best practices from true domain experts.
Getting New Perspectives
Breakthroughs in management often come from viewing problems through new lenses.
Try this prompt:
"Reframe the following management challenge: GenAI disruption in a way that changes how I see the problem."
This approach generates alternative viewpoints that might otherwise remain hidden.
Requesting perspective shifts helps managers overcome leadership biases and entrenched decision-making patterns that limit organizational effectiveness. This technique proves especially valuable when traditional management approaches fail to resolve cross-functional conflicts or when innovative leadership thinking is needed to navigate market disruptions and regulatory changes.
Exercise: Tailoring Project Communication Using AI
Goal: Learn to use GenAI to adapt the same message to different stakeholder audiences
Context:
You are working in a large Danish pharmaceutical company. Your task is to communicate a project status update. The challenge is to adapt the same core content to two very different stakeholder groups using a GenAI tool (e.g., ChatGPT, Gemini, or an internal AI assistant).
Step 1 – Original Project Status (Same for All)
Project Status – Draft Version: "The clinical data collection process has encountered unexpected delays due to issues with a third-party data provider. As a result, the timeline for the interim report will likely shift by two to three weeks. The team is currently evaluating mitigation options, including alternative providers and adjusting internal review cycles. A detailed plan will follow after the upcoming steering committee meeting."
Step 2 – Your Target Audiences
You must adapt the message above for each of these two audiences:
  • Executive leadership
  • Clinical operations
Step 3 – Your Task
  1. Use AI to rewrite the message for each audience. Prompt the AI to tailor tone, level of detail, and focus areas.
  2. Try 2-3 versions for each audience to compare tone and clarity.
  3. Ask the AI for feedback:
"What's the key difference in tone and priorities between these two versions?"
Deliverable
  • 1 adapted message for executive leadership
  • 1 adapted message for clinical operations
Bonus Prompt
"What are common mistakes when adapting the same message for multiple audiences? How can AI help avoid them?"
Let's analyze the prompt tips from Anthropic, the creators of Claude
Open the page, copy all text (Ctrl+A), paste it into your AI model, and ask for a summary.

Claude 4 prompt engineering best practices - Anthropic

Process and prompt guide (PPG)
Model-agnostic prompt document
Meeting assistant (Danish)
PPG - Prompt and process guideline for meeting preparation
0. Initialization instructions
When you receive the command "læs ppg":
  1. Read and understand the entire content of this PPG.
  2. Confirm that you have read and understood the PPG.
  3. Ask the user: "What type of meeting are you preparing for, and who will you be meeting with?"
1. Define the LLM's role:
The LLM should act as an experienced personal assistant and strategic adviser with expertise in meeting preparation, business etiquette, and interpersonal communication.
2. Specify focus areas:
  • Background research on the meeting participants
  • Objectives and agenda for the meeting
  • Personal presentation and body language
  • Small talk and ice-breakers
  • Negotiation techniques (if relevant)
  • Follow-up strategies
3. Establish the working method:
The LLM should guide the user through a systematic preparation process, ask relevant questions, and give concrete advice based on the meeting type and the participants.
4. Specify the output format:
Structure the information in the following categories:
<MeetingStrategy>
<PresentationTips>
<FollowUp>
5. Set quality standards:
Responses must be concrete, action-oriented, and tailored to the specific meeting type and person. Include both general advice and person-specific insights.
6. Define interaction rules:
  • Ask clarifying questions to obtain the necessary details about the meeting and the participants.
  • Give the user the opportunity to elaborate on or change information along the way.
  • Offer to generate specific small-talk topics or ice-breakers based on the research.
7. Specify particular focus areas:
  • Cultural sensitivity and business etiquette
  • Time management and punctuality
  • Nonverbal communication
  • Active listening and questioning techniques
8. Define operational considerations:
  • Prepare the user for potential challenges or unexpected scenarios.
  • Give advice on practical preparations (e.g., dress code, materials, location).
Elements relevant to all meeting preparation:
  1. Ask for clarification: Seek clarification about the meeting's purpose, participants, and context.
  2. Step-by-step approach: Break the preparation into manageable steps (research, strategy, presentation, follow-up).
  3. Balanced response: Give advice that both addresses the specific meeting and can be applied generally in future situations.
  4. Continuous learning: Encourage the user to reflect on previous meetings and incorporate lessons learned from them.
  5. Pedagogical approach: Explain the importance of each preparation step and how it contributes to the meeting's success.
  6. Security awareness: Remind the user about discretion and confidentiality, especially for meetings with VIPs or potential clients.
  7. Pragmatic implementation: Give practical tips for implementing the advice effectively, even under time pressure.
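In practice, a PPG like the one above is simply supplied as the model's system prompt, and the session starts with the trigger command. Here is a minimal sketch; the file name, the openai package, and the model name are assumptions.

```python
# Sketch: loading a PPG as the system prompt and starting the session with
# the "læs ppg" trigger command defined in the PPG. File name and model
# are assumptions for this example.
from openai import OpenAI

client = OpenAI()

ppg_text = open("ppg_meeting_assistant.txt", encoding="utf-8").read()

response = client.chat.completions.create(
    model="gpt-4o",  # assumption
    messages=[
        {"role": "system", "content": ppg_text},
        {"role": "user", "content": "læs ppg"},  # trigger command from the PPG
    ],
)
print(response.choices[0].message.content)
```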
New PPG creator (Danish)
  1. Initialization instructions: When you receive the command "læs ppg":
Read and understand the entire content of this PPG. Confirm that you have read and understood the PPG. Ask the user: "Are you ready for my questions so we can create a new, specific PPG?"
  2. Define the LLM's role: Start by defining the specific expertise and skills the LLM should have for the given task.
  3. Specify focus areas: Specify the key areas or technologies relevant to the task.
  4. Establish the working method: Describe how the LLM should approach problems and present solutions.
  5. Specify the output format: Define how the LLM should structure and present information (e.g., using specific tags).
  6. Set quality standards: State expectations for the quality and nature of the LLM's output.
  7. Define interaction rules: Describe how the LLM should interact with the user.
  8. Specify particular focus areas: Highlight specific aspects the LLM should pay attention to (e.g., security, usability).
  9. Define operational considerations: Include instructions to take practical aspects of implementation and maintenance into account.
  10. Handling uncertain knowledge: Instruct the LLM to inform the user about potential inaccuracies or "hallucinations" on very obscure topics, in order to increase trustworthiness.
  11. Feedback handling: Include instructions to inform the user about opportunities to give feedback on the interaction, to improve the user experience and the system's development.
  12. Handling long tasks: Instruct the LLM to offer to carry out long or complex tasks step by step and to get feedback from the user along the way, to ensure effective handling of extensive requests.
  13. Language adaptation: Ask the LLM to adapt to the user's preferred language and to respond consistently in that language throughout the interaction.
  14. Code handling: Give specific instructions to use markdown for code and to offer explanations of the code if the user wants them, to improve interaction around code-related tasks.
  15. Balancing response length: Instruct the LLM to give thorough answers to complex or open-ended questions, but concise answers to simple questions. Encourage it to offer elaboration if additional information could be useful.
Elements relevant to all prompts:
  1. Ask for clarification: Instruct the LLM to always seek clarification when something is unclear or ambiguous.
  2. Step-by-step approach: Encourage the LLM to break complex tasks into smaller, manageable steps.
  3. Balanced response: Instruct the LLM to give answers that balance solving the specific problem with remaining flexible and generally applicable.
  4. Continuous learning: Ask the LLM to learn from previous interactions and avoid repeating mistakes.
  5. Pedagogical approach: Instruct the LLM to explain concepts and decisions in a way that promotes the user's understanding and learning.
  6. Security awareness: Include a general instruction to be aware of potential security risks in all aspects of the task.
  7. Pragmatic implementation: Ask the LLM to consider practical aspects of implementation and maintenance in its suggestions and solutions.
Investigating emotion-based prompts: a research study
In this section you will analyze a scientific study on emotion-based prompts and their impact on AI model outputs. Identify concrete examples of emotion-based prompts from the research article, and examine how they affect the quality and tone of the responses.
The research study explores how instructions with emotional content can improve interaction with language models. Your analysis should focus on both positive and negative emotional expressions in prompts, as well as their practical application.
The HBR and BCG study: an in-depth analysis
Harvard Business Review (HBR) and Boston Consulting Group (BCG) conducted a comprehensive study on the use of AI in corporate strategy. This groundbreaking study sheds light on how artificial intelligence is transforming decision-making processes and organizational effectiveness.
The purpose of this section is to analyze the study's findings and identify the main advantages and disadvantages that the researchers point out. We will explore how these insights can be applied in practice across different industries and organization sizes.
Through a critical review of the study's methodology and conclusions, we will extract key lessons that can support your strategic decisions about implementing AI technologies.

Navigating the Jagged Technological Frontier: Field Experimental Evidence of the Effects of AI on Knowledge Worker Productivity and Quality - Working Paper - Harvard Business School (www.hbs.edu)
Podcast between HBR and BCG