The Ultimate Memory Hooks for AWS Certified AI Practitioner (AIF-C01)
Preparing for the AWS Certified AI Practitioner (AIF-C01) exam can feel overwhelming: not because the concepts are complex, but because the exam covers a wide range of AI terminology, AWS services, ML workflows, prompt engineering, RAG, and evaluation metrics.
When I started preparing, I quickly realized something: the content wasn't "hard", but there was so much to remember, and many terms sounded similar:
- Supervised vs Unsupervised
- Evaluation metrics
- SageMaker services
- Bedrock features
- Prompt engineering techniques
- RAG components
To keep things simple, I began writing down small memory hooks, short patterns, and mental shortcuts on a notepad.
These hooks helped me instantly recall concepts during the exam, especially when faced with confusingly worded scenario questions.
During my preparation, I followed the excellent QA/CloudAcademy course "AWS Certified AI Practitioner (AIF-C01) Certification Preparation" by Danny Jessee.
This course helped me understand how different services fit together, while my memory hooks helped me recall the details under exam pressure.
This blog post is a single consolidated guide of all the memory hooks and mnemonics that helped me pass the exam, shared so you (and others) can benefit as well.
🧠 1. Machine Learning Basics
Supervised vs Unsupervised
Labels → Supervised
No labels → Unsupervised
Supervised = teacher + correct answers
Unsupervised = find patterns (clustering, segments)
Classification vs Regression
Classes → Classification
Numbers → Regression
Overfitting vs Underfitting
Overfitting = too complex → increase regularization
Underfitting = too simple → decrease regularization
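To see why more regularization fights overfitting, here is a minimal sketch using closed-form 1-D ridge regression (L2 regularization); the data points are invented:

```python
# Closed-form 1-D ridge regression (no intercept): w = Σxy / (Σx² + λ).
# A larger λ (stronger L2 regularization) shrinks the weight toward 0,
# making the model simpler and less prone to overfitting.
def ridge_weight(xs, ys, lam):
    sxy = sum(x * y for x, y in zip(xs, ys))
    sxx = sum(x * x for x in xs)
    return sxy / (sxx + lam)

xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.1, 3.9, 6.2, 7.8]  # roughly y = 2x with noise

w_loose = ridge_weight(xs, ys, lam=0.0)   # ~1.99, fits the data closely
w_tight = ridge_weight(xs, ys, lam=30.0)  # ~0.995, shrunk toward 0
print(w_loose, w_tight)
```

The exam hook is the direction of the knob: too complex → turn λ up, too simple → turn λ down.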
🧠 2. Key Algorithms
Clustering
Group customers? No labels? → K-Means
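A minimal 1-D k-means sketch makes the "group customers, no labels" hook concrete; the spend figures are invented:

```python
# Minimal 1-D k-means: assign each point to its nearest centroid, then move
# each centroid to the mean of its cluster; repeat.
def kmeans_1d(points, centroids, iters=20):
    for _ in range(iters):
        clusters = {i: [] for i in range(len(centroids))}
        for p in points:
            nearest = min(range(len(centroids)), key=lambda i: abs(p - centroids[i]))
            clusters[nearest].append(p)
        centroids = [sum(v) / len(v) if v else centroids[i] for i, v in clusters.items()]
    return sorted(centroids)

# Two obvious customer segments emerge with no labels: low vs high spenders.
spend = [10, 12, 11, 95, 100, 102]
print(kmeans_1d(spend, centroids=[0.0, 50.0]))  # → [11.0, 99.0]
```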
Image Classification
Flower classification → k-NN or Decision Tree
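A k-NN classifier is just "vote among the nearest labeled examples", which this sketch shows with hypothetical flower measurements (not a real dataset):

```python
from collections import Counter

# Minimal k-nearest-neighbours classifier: predict the majority label
# among the k training points closest to the query.
def knn_predict(train, query, k=3):
    # train: list of ((petal_len, petal_wid), label) pairs
    dist = lambda a, b: sum((x - y) ** 2 for x, y in zip(a, b))
    nearest = sorted(train, key=lambda item: dist(item[0], query))[:k]
    return Counter(label for _, label in nearest).most_common(1)[0][0]

train = [((1.4, 0.2), "setosa"), ((1.3, 0.2), "setosa"),
         ((4.7, 1.4), "versicolor"), ((4.5, 1.5), "versicolor"),
         ((1.5, 0.3), "setosa")]
print(knn_predict(train, (1.4, 0.25)))  # → setosa
```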
Anomaly Detection
No labels + abnormal detection → Autoencoders
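Autoencoders flag anomalies by reconstruction error: normal inputs reconstruct well, abnormal ones do not. As a stand-in for a trained network, this sketch "reconstructs" every input as the training mean, which is enough to show the thresholding logic (data and threshold are invented):

```python
# Stand-in for an autoencoder: "reconstruct" any input as the mean of the
# unlabeled training data, then flag inputs whose reconstruction error
# exceeds a threshold as anomalies.
def fit_mean(train):
    return sum(train) / len(train)

def is_anomaly(x, mean, threshold):
    reconstruction_error = abs(x - mean)
    return reconstruction_error > threshold

normal_traffic = [100, 98, 103, 101, 99]  # unlabeled "normal" samples
mean = fit_mean(normal_traffic)
print(is_anomaly(102, mean, threshold=10))  # → False
print(is_anomaly(250, mean, threshold=10))  # → True
```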
🧠 3. GenAI Prompt Engineering
- Show the desired format with examples → Few-shot prompting
- Multi-step workflow → Prompt chaining
- Reason + Action + Tool use → ReAct prompting
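Few-shot prompting is just prepending worked examples so the model copies the format; a minimal sketch (the review/label pairs are invented):

```python
# Few-shot prompting: show the model the desired input/output format with a
# couple of worked examples before the real input.
examples = [
    ("I loved this product!", "positive"),
    ("Terrible, broke after a day.", "negative"),
]

def few_shot_prompt(examples, new_input):
    shots = "\n".join(f"Review: {text}\nSentiment: {label}" for text, label in examples)
    return f"{shots}\nReview: {new_input}\nSentiment:"

print(few_shot_prompt(examples, "Pretty decent for the price."))
```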
Temperature
Creativity ↑ → Temperature ↑
Consistency ↑ → Temperature ↓
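Under the hood, temperature rescales the model's logits before sampling: low temperature sharpens the distribution (consistent), high temperature flattens it (creative). A sketch with invented logits:

```python
import math

# Softmax with temperature: divide logits by T before normalizing.
# T < 1 → sharper distribution (top token dominates, consistent output).
# T > 1 → flatter distribution (more tokens plausible, creative output).
def softmax_with_temperature(logits, temperature):
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.5]
cold = softmax_with_temperature(logits, 0.2)  # top token's probability ≈ 0.99
hot = softmax_with_temperature(logits, 2.0)   # top token's probability ≈ 0.48
print(cold[0], hot[0])
```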
🧠 4. LLM Inference Parameters
- Temperature: Creativity
- Top-K: Number of token choices
- Top-P: Probability bucket
- Max Tokens: Output length
- Frequency Penalty: Reduce repeated words
- Presence Penalty: Discourage repeated topics
Creativity → Temp / Top-K / Top-P
Length → Max Tokens
Repetition → Frequency & Presence
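Top-K and Top-P both shrink the pool of candidate tokens, just with different cut-off rules; a sketch over an invented next-token distribution:

```python
# Top-K keeps the K most probable next tokens.
def top_k(probs, k):
    return dict(sorted(probs.items(), key=lambda kv: kv[1], reverse=True)[:k])

# Top-P keeps the smallest set of tokens whose cumulative probability
# reaches p (the "probability bucket").
def top_p(probs, p):
    kept, total = {}, 0.0
    for tok, pr in sorted(probs.items(), key=lambda kv: kv[1], reverse=True):
        kept[tok] = pr
        total += pr
        if total >= p:
            break
    return kept

probs = {"cat": 0.5, "dog": 0.3, "fish": 0.15, "rock": 0.05}
print(top_k(probs, 2))    # → {'cat': 0.5, 'dog': 0.3}
print(top_p(probs, 0.9))  # → {'cat': 0.5, 'dog': 0.3, 'fish': 0.15}
```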
🧠 5. RAG (Retrieval-Augmented Generation)
Purpose of Chunking
Chunking = better retrieval → better context
Batch Steps in RAG
- Content embeddings
- Build search index
(NOT query embeddings or response generation; those happen at query time)
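The batch steps above can be sketched end to end: chunk the documents, embed each chunk, and build an index; query embedding happens later, at retrieval time. The "embedding" here is a toy bag-of-words vector standing in for a real embedding model, and the document text is invented:

```python
import math

# Batch (ingestion) side of RAG: chunk → embed chunks → build search index.
def chunk(text, size):
    words = text.split()
    return [" ".join(words[i:i + size]) for i in range(0, len(words), size)]

def embed(text, vocab):
    # Toy bag-of-words "embedding": count of each vocab word in the text.
    words = text.lower().split()
    return [words.count(w) for w in vocab]

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

doc = "S3 stores objects in buckets while EC2 runs virtual servers in the cloud"
vocab = ["s3", "buckets", "ec2", "servers", "cloud"]
index = [(c, embed(c, vocab)) for c in chunk(doc, size=7)]  # batch steps

# Query time (NOT a batch step): embed the query, retrieve the closest chunk.
query_vec = embed("s3 buckets", vocab)
best = max(index, key=lambda item: cosine(item[1], query_vec))
print(best[0])
```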
LLM Type for Multimodal Search
Text + image queries → Multimodal model
🧠 6. Evaluating ML Models
Summarization Metrics
Summarization → ROUGE
(If ROUGE is not an option → choose BLEU)
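The core of ROUGE-1 is simply recall of reference unigrams; real ROUGE implementations are more elaborate, but this sketch with invented sentences shows the idea:

```python
# ROUGE-1 recall: fraction of reference unigrams that also appear in the
# candidate summary (real ROUGE also handles n-grams, stemming, etc.).
def rouge1_recall(reference, candidate):
    ref = reference.lower().split()
    cand = set(candidate.lower().split())
    overlap = sum(1 for w in ref if w in cand)
    return overlap / len(ref)

reference = "the cat sat on the mat"
candidate = "the cat lay on the mat"
print(rouge1_recall(reference, candidate))  # → 5/6 ≈ 0.833
```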
Translation Metrics
Translation → BLEU / METEOR
Classification Metrics
Imbalanced data → F1 Score
Balanced data → Accuracy
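Why F1 on imbalanced data? Accuracy rewards a model that always predicts the majority class, while F1 exposes it; a sketch with invented labels:

```python
# Accuracy vs F1 on imbalanced data.
def accuracy(y_true, y_pred):
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def f1(y_true, y_pred, positive=1):
    tp = sum(t == positive and p == positive for t, p in zip(y_true, y_pred))
    fp = sum(t != positive and p == positive for t, p in zip(y_true, y_pred))
    fn = sum(t == positive and p != positive for t, p in zip(y_true, y_pred))
    if tp == 0:
        return 0.0
    precision, recall = tp / (tp + fp), tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

y_true = [0] * 95 + [1] * 5       # 95% negatives: imbalanced
always_negative = [0] * 100       # lazy model: always predicts majority class
print(accuracy(y_true, always_negative))  # → 0.95 (looks great)
print(f1(y_true, always_negative))        # → 0.0  (reveals the problem)
```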
Regression Metrics
Numeric prediction → MSE / RMSE
LLM Quality
Perplexity → how surprised is the model? (lower = better)
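Both hooks above reduce to short formulas: RMSE is the root of the mean squared error, and perplexity is the exponential of the average negative log-probability the model assigned to the true tokens. A sketch with invented values:

```python
import math

# RMSE for numeric predictions.
def rmse(y_true, y_pred):
    return math.sqrt(sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true))

# Perplexity from per-token probabilities: exp(mean negative log-probability).
# A confident model (high probabilities) is less "surprised" → lower perplexity.
def perplexity(token_probs):
    return math.exp(-sum(math.log(p) for p in token_probs) / len(token_probs))

print(rmse([3.0, 5.0, 7.0], [2.5, 5.0, 8.0]))  # ≈ 0.645
print(perplexity([0.9, 0.8, 0.95]))            # confident model: low
print(perplexity([0.2, 0.1, 0.3]))             # surprised model: high
```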
🧠 7. AWS Services: Quick Memory Hooks
Model Cards
Governance + documentation → Model Cards
Model Monitor
Detect drift in production → Model Monitor
Ground Truth
Human labeling → Ground Truth
JumpStart
Pre-built models + quick deploy → JumpStart
SageMaker Canvas
No-code data prep → Canvas
HealthScribe
Medical speech-to-text → HealthScribe
Guardrails for Bedrock
Responsible AI (safety filters) → Guardrails
PartyRock
Experiment + learn + no cost → PartyRock
(Not for VPC, not for deployments)
🧠 8. GenAI Lifecycle
Design → Data → Train → Evaluate → Deploy → Monitor
Evaluation Stage
Accuracy testing
Safety + toxicity testing
Hallucination measurements
Inference
- Train = Learn
- Infer = Predict
- Deploy = Serve
🧠 9. Embeddings
Embeddings = meaning → vectors
Reduced dimensions → same meaning → similarity search
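The hook "meaning → vectors" means similar meanings end up close together, so nearest-neighbour search finds related items. The vectors below are invented, standing in for real embedding model output:

```python
import math

# Cosine similarity: angle-based closeness between two embedding vectors.
def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

embeddings = {
    "king":  [0.90, 0.80, 0.10],
    "queen": [0.88, 0.82, 0.12],
    "apple": [0.10, 0.20, 0.90],
}

def most_similar(word, embeddings):
    query = embeddings[word]
    others = [w for w in embeddings if w != word]
    return max(others, key=lambda w: cosine(embeddings[w], query))

print(most_similar("king", embeddings))  # → queen
```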
🧠 10. Foundational Concepts
Fine-tuning
Teach big model a small task well.
✅ Domain-specific labeled data
✅ Improves specific task performance
❌ NOT retraining from scratch
❌ NOT updating the model with recent events
Responsible AI
Safety + filters + toxicity detection → Use Guardrails
Final Thoughts
These memory hooks are designed with one purpose:
- Make recall instant during the exam
- Reduce confusion between similar concepts
- Build confidence with patterns instead of memorising definitions