# Prompt Engineering Lab Journal
A personal workspace to document 9 projects testing prompt effectiveness.
## 🧠 Objective
To design, test, and evaluate various prompt types to understand how clarity, tone, structure, and creativity affect AI-generated outputs.
## 📂 Structure
This lab journal contains 9 projects. Each project will include:
- Prompt Type & Goal – what kind of prompt is tested and why.
- Experiment Setup – tools, parameters, and method used.
- Prompt Versions – iterations of the prompt for comparison.
- Evaluation – clarity, accuracy, creativity, tone, and structure scores.
- Insights & Learnings – what worked, what didn’t, and takeaways.
## ⚙️ Tools & Workspace
- ChatGPT / Google Gemini / Claude – for generating and testing prompts.
- Google Sheets / Excel – for logging and analyzing evaluation scores.
- Canva – for visual representations and comparison charts.
- Python & MkDocs – for building and structuring this professional documentation site (a minimal configuration sketch follows this list).
- GitHub Pages – for hosting and version control of the final portfolio.
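The journal itself is assembled with MkDocs and published to GitHub Pages. The configuration below is a minimal, illustrative sketch only; the site name, theme, and page layout are assumptions for this example, not the exact settings used for this site.

```yaml
# mkdocs.yml — illustrative sketch; actual settings for this site may differ.
site_name: Prompt Engineering Lab Journal
theme:
  name: material            # assumes the Material for MkDocs theme is installed
nav:
  - Home: index.md
  - Projects:
      - Project 1: projects/project-1.md   # hypothetical file layout
```

With a configuration like this in place, `mkdocs build` generates the static site and `mkdocs gh-deploy` publishes it to GitHub Pages.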
## 📏 Evaluation Metrics
| Metric | Description |
|---|---|
| Clarity | How clear and understandable the output is. |
| Accuracy | How closely the output meets the intended goal, and how factually correct it is. |
| Tone | Whether the response tone matches the context or purpose. |
| Creativity | Level of originality or uniqueness in the response. |
| Structure | How well-organized and formatted the output is. |
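Scores are logged in a spreadsheet, but the same record can be sketched in code. The Python snippet below is a hypothetical illustration of how one prompt evaluation might be structured and averaged; the field names mirror the metrics above, and the 1–5 rating scale is an assumption for the example rather than a fixed rule of this journal.

```python
from dataclasses import dataclass, asdict
from statistics import mean

@dataclass
class PromptEvaluation:
    """One scored prompt run; each metric is rated 1-5 by a human reviewer."""
    prompt_version: str
    clarity: int
    accuracy: int
    tone: int
    creativity: int
    structure: int

    def overall(self) -> float:
        """Unweighted mean of the five metric scores."""
        return mean([self.clarity, self.accuracy, self.tone,
                     self.creativity, self.structure])

# Example: comparing two iterations of the same prompt.
v1 = PromptEvaluation("v1", clarity=3, accuracy=4, tone=3, creativity=2, structure=3)
v2 = PromptEvaluation("v2", clarity=5, accuracy=4, tone=4, creativity=3, structure=5)
print(asdict(v2))                  # row as a dict, ready to append to a sheet or CSV
print(v1.overall(), v2.overall())  # 3.0 vs 4.2
```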
## ⚖️ Project Disclaimers and Notes

### Evaluation Methodology
The scores (Clarity, Accuracy, Tone, etc.) found throughout this journal represent subjective human judgments made against the defined criteria above. They are not generated or assigned by any of the AI tools tested.
### Coincidence Note
All tasks, product names, and content generated are entirely fictional and for educational purposes only. Any resemblance to real-world entities is purely coincidental.