How These Google AI Essentials Practice Questions Are Organized
These 30 questions are grouped across the five modules of Google AI Essentials: Introduction to AI (Q1–Q6), Maximize Productivity With AI Tools (Q7–Q12), Discover the Art of Prompting (Q13–Q18), Use AI Responsibly (Q19–Q24), and Stay Ahead of the AI Curve (Q25–Q30). Each question is followed by the correct answer and an explanation of why it is correct—and why the common wrong answers are wrong. The course is accessed through grow.google/ai-essentials and requires passing each module's graded assessment at 80% or higher.
Module 1: Introduction to AI (Questions 1–6)
Q1. A spam filter that learns from thousands of labeled emails—"spam" or "not spam"—to classify future emails is an example of which type of machine learning?
Answer: Supervised learning. The filter is trained on labeled examples, where the correct output (spam/not spam) is known for every training example. Unsupervised learning finds patterns in unlabeled data. Reinforcement learning learns through reward signals from an environment, not labeled datasets.
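To make the "labeled examples" idea concrete, here is a deliberately tiny sketch of supervised learning in plain Python. The training emails and labels are invented for illustration, and the word-count "model" is far simpler than a real spam filter—the point is only that the classifier's behavior comes from labeled data, not hand-written rules.

```python
from collections import Counter

# Tiny labeled training set (the "supervision"): every training
# email comes with a known correct label, spam or not_spam.
training_data = [
    ("win a free prize now", "spam"),
    ("claim your free money", "spam"),
    ("meeting agenda for monday", "not_spam"),
    ("project status update attached", "not_spam"),
]

# "Training": count how often each word appears under each label.
word_counts = {"spam": Counter(), "not_spam": Counter()}
for text, label in training_data:
    word_counts[label].update(text.split())

def classify(text):
    """Score a new email by which label's vocabulary it matches more."""
    scores = {
        label: sum(counts[word] for word in text.split())
        for label, counts in word_counts.items()
    }
    return max(scores, key=scores.get)

print(classify("free prize inside"))  # classified using learned word counts
```

Nothing in `classify` mentions the word "free" explicitly; that association was learned from the labeled examples, which is exactly what distinguishes this from the rule-based approach in the next question.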
Q2. A developer writes explicit if-then rules to check for fraud in financial transactions. An ML engineer trains a model on historical fraud data instead. What is the key difference?
Answer: The ML model learns patterns from data rather than following hand-coded rules. Traditional programming requires a human to define every rule explicitly. ML models infer rules from data—including patterns too complex or numerous for a human to enumerate manually.
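The contrast can be shown in a few lines. The transaction amounts and labels below are made up, and real fraud models learn from many features rather than a single threshold—this sketch only illustrates a human-chosen rule versus a decision boundary inferred from historical data.

```python
# Hand-coded approach: a human writes the rule explicitly.
def rule_based_fraud_check(amount):
    return amount > 10_000  # cutoff chosen by a developer

# Learning approach: infer a cutoff from labeled historical
# transactions (amount, was_fraud). Data is illustrative only.
history = [
    (200, False), (950, False), (4_800, False),
    (7_500, True), (9_200, True), (15_000, True),
]

# "Learn" a decision boundary: the midpoint between the largest
# legitimate amount and the smallest fraudulent amount observed.
max_legit = max(amount for amount, fraud in history if not fraud)
min_fraud = min(amount for amount, fraud in history if fraud)
learned_threshold = (max_legit + min_fraud) / 2  # 6150.0 here

def learned_fraud_check(amount):
    return amount > learned_threshold

print(learned_fraud_check(8_000))     # True: flagged by the learned boundary
print(rule_based_fraud_check(8_000))  # False: below the hand-coded cutoff
```

The learned check flags an $8,000 transaction that the hand-coded rule misses, because the boundary came from the data rather than a developer's guess.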
Q3. An AI system trained on medical data from urban hospitals is deployed in rural clinics. The system performs worse in rural settings. What is the most likely cause?
Answer: The training data does not represent the population where the model is deployed (distributional shift). The model learned patterns from one population and is being applied to a different one. This is a representation failure in the training data, not a flaw in the algorithm itself.
Q4. Which of the following is an AI system most reliably able to do?
Answer: Identify patterns in large datasets faster than humans can manually. AI systems are most reliable at pattern recognition tasks—classification, prediction, recommendation—on data similar to what they were trained on. They are unreliable for tasks requiring genuine novel reasoning, emotional understanding, or operating in environments significantly different from their training data.
Q5. An AI chatbot confidently states an incorrect historical date as a fact. What term describes this behavior?
Answer: Hallucination. Hallucination refers to AI systems generating content that is factually incorrect but stated with apparent confidence. It results from the model predicting plausible-sounding text based on statistical patterns rather than retrieving verified facts from a database.
Q6. Which of the following best describes the difference between AI and machine learning?
Answer: Machine learning is a subset of AI that uses data and algorithms to learn without being explicitly programmed for every task. AI is the broader category—any system that performs tasks that would typically require human intelligence. ML is a specific approach within AI that achieves this by learning from data.
Module 2: Maximize Productivity With AI Tools (Questions 7–12)
Q7. A project manager uses an AI tool to summarize a 40-page project report into five bullet points for an executive briefing. Before sharing the summary, what should the project manager do?
Answer: Verify the summary against the original report to ensure accuracy. AI summarization tools can omit critical nuances, mischaracterize data, or introduce outright inaccuracies even in straightforward documents. Sharing an AI-generated summary without review creates reputational and professional risk.
Q8. Which task is AI most likely to help a marketing professional complete more efficiently?
Answer: Drafting multiple variations of ad copy for A/B testing. Generating text variations across consistent parameters—same product, different tone or audience—is well-suited to current AI tools. Tasks requiring genuine creative judgment, strategic market insight, or real-time competitive analysis are less reliably assisted by AI tools at their current capability level.
Q9. A recruiter uses an AI tool to screen resumes. The tool works well for some candidate pools but performs inconsistently for others. What should the recruiter consider before scaling this process?
Answer: Whether the tool was trained on data representative of all candidate pools being evaluated. AI screening tools can encode historical hiring biases if trained on past hiring decisions. The recruiter should audit the tool's performance across demographic groups before using it as the primary screening mechanism.
Q10. An HR professional needs to draft 15 individualized performance review templates for different role levels. How can AI tools most effectively support this task?
Answer: By generating an initial draft template for each role level that the HR professional then reviews and customizes. AI tools are efficient at generating structured text frameworks. The professional's expertise is necessary for ensuring each template reflects accurate role expectations and company standards—AI output is a first draft, not a final product.
Q11. What is the primary risk of relying on AI-generated research summaries without verification?
Answer: The summaries may contain fabricated citations or inaccurate facts stated confidently. AI language models can generate plausible-sounding source references that do not exist. Any AI-generated research summary used in professional or academic work requires verification against the actual sources.
Q12. A sales team wants to use AI to personalize outreach emails at scale. What is the most important consideration before implementing this?
Answer: Ensuring the AI-generated emails are reviewed for accuracy and appropriateness before sending. AI personalization can produce errors—incorrect names, wrong product references, tone mismatches—and replicate them across every message in a batch. A review step prevents sending incorrect or off-brand communications to prospects in bulk.
Module 3: Discover the Art of Prompting (Questions 13–18)
Q13. A prompt reads: "Translate the following sentence to French: 'The meeting is at 3pm.'" No example translation is provided. What prompting technique is this?
Answer: Zero-shot prompting. Zero-shot prompting gives the model a task with no examples. For common, well-defined tasks like translation, zero-shot works reliably. For tasks requiring a specific format or style, adding examples (few-shot prompting) typically improves output consistency.
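As a minimal sketch, a zero-shot prompt is just the task instruction plus the input—no worked examples for the model to imitate. The helper below is hypothetical, not part of any real prompting library:

```python
def zero_shot_prompt(task, text):
    """Build a zero-shot prompt: instruction plus input, no examples."""
    return f"{task}\n\n{text}"

prompt = zero_shot_prompt(
    "Translate the following sentence to French:",
    "'The meeting is at 3pm.'",
)
print(prompt)
```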
Q14. A content writer provides the AI with two examples of their previous blog post introductions, then asks it to write an introduction in the same style. What technique is this?
Answer: Few-shot prompting. Few-shot prompting provides two or more examples before making the request. One example is one-shot prompting. Providing examples significantly improves output consistency when style, format, or tone needs to match a specific pattern.
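A few-shot prompt adds worked input/output pairs ahead of the new request so the model can imitate their pattern. The builder function and example texts below are invented for illustration; real prompts would use the writer's actual blog introductions.

```python
def few_shot_prompt(instruction, examples, new_input):
    """Assemble a few-shot prompt: instruction, then worked
    input/output pairs, then the new input left open to complete."""
    parts = [instruction]
    for example_in, example_out in examples:
        parts.append(f"Input: {example_in}\nOutput: {example_out}")
    parts.append(f"Input: {new_input}\nOutput:")
    return "\n\n".join(parts)

examples = [
    ("Launching our spring sale today.",
     "Spring has sprung, and so have our prices."),
    ("New feature: dark mode is here.",
     "Your eyes asked, and we listened: dark mode has arrived."),
]
prompt = few_shot_prompt(
    "Rewrite each announcement as a playful blog introduction.",
    examples,
    "We now ship to 12 new countries.",
)
print(prompt)
```

Ending the prompt at "Output:" invites the model to continue the established pattern, which is what makes the examples steer style and format.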
Q15. A prompt reads: "Let's think step by step: A customer returns a product after 45 days. Our policy allows returns within 30 days. The customer says the product was defective from day one. Should we approve the return?" What prompting technique is this?
Answer: Chain-of-thought prompting. Chain-of-thought prompting instructs the model to reason through a problem step by step before producing a final answer. It is particularly effective for decisions involving multiple conditions, where a direct answer might skip reasoning steps that matter.
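The return-policy prompt above can be sketched as a template: a step-by-step cue, the relevant conditions, and the question. The helper function is hypothetical—just one way to structure such a prompt, not an API from any real library.

```python
def chain_of_thought_prompt(facts, question):
    """Prepend a step-by-step cue so the model works through each
    condition before answering, instead of jumping to a verdict."""
    fact_lines = "\n".join(f"- {fact}" for fact in facts)
    return (
        "Let's think step by step.\n\n"
        f"Facts:\n{fact_lines}\n\n"
        f"Question: {question}\n"
        "Walk through each fact, then give a final answer."
    )

prompt = chain_of_thought_prompt(
    [
        "A customer returns a product after 45 days.",
        "Our policy allows returns within 30 days.",
        "The customer says the product was defective from day one.",
    ],
    "Should we approve the return?",
)
print(prompt)
```

Listing the conditions separately matters here: the 30-day rule and the defect claim pull in opposite directions, and the step-by-step cue pushes the model to weigh both rather than answer from the first condition it matches.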