Prompt Engineering Lab Journal

A personal workspace to document 9 projects testing prompt effectiveness.

🧠 Objective

To design, test, and evaluate various prompt types to understand how clarity, tone, structure, and creativity affect AI-generated outputs.


📂 Structure

This lab journal contains 9 projects. Each project includes:

  1. Prompt Type & Goal – what kind of prompt is tested and why.
  2. Experiment Setup – tools, parameters, and method used.
  3. Prompt Versions – iterations of the prompt for comparison.
  4. Evaluation – clarity, accuracy, creativity, tone, and structure scores.
  5. Insights & Learnings – what worked, what didn’t, and takeaways.

⚙️ Tools & Workspace

  • ChatGPT / Google Gemini / Claude – for generating and testing prompts.
  • Google Sheets / Excel – for logging and analyzing evaluation scores (a lightweight CSV-based alternative is sketched after this list).
  • Canva – for visual representations and comparison charts.
  • Python & MkDocs – for building and structuring this professional documentation site.
  • GitHub Pages – for hosting and version control of the final portfolio.
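For entries that outgrow a spreadsheet, the same scores can be appended to a CSV file from Python and opened later in Google Sheets or Excel. A minimal sketch; the file name, column names, and 1–5 scale are illustrative assumptions, not part of the project:

```python
import csv
from pathlib import Path

LOG = Path("evaluation_log.csv")  # illustrative file name
FIELDS = ["project", "prompt_version",
          "clarity", "accuracy", "tone", "creativity", "structure"]

def log_scores(row: dict) -> None:
    """Append one evaluation row, writing the header only if the file is new."""
    write_header = not LOG.exists()
    with LOG.open("a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=FIELDS)
        if write_header:
            writer.writeheader()
        writer.writerow(row)

# Hypothetical scores for one prompt iteration.
log_scores({"project": 1, "prompt_version": "v2", "clarity": 4,
            "accuracy": 5, "tone": 4, "creativity": 3, "structure": 5})
```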

📏 Evaluation Metrics

Metric        Description
Clarity       How clear and understandable the output is.
Accuracy      How closely the output meets the intended goal; factual correctness.
Tone          Whether the response tone matches the context or purpose.
Creativity    Level of originality or uniqueness in the response.
Structure     How well-organized and formatted the output is.
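Because every metric is scored on the same scale, prompt versions can be compared by averaging the five scores. A short sketch of that comparison, assuming a 1–5 scale with hypothetical values:

```python
# Hypothetical 1-5 scores for two iterations of the same prompt.
scores = {
    "v1": {"clarity": 3, "accuracy": 4, "tone": 3, "creativity": 2, "structure": 3},
    "v2": {"clarity": 5, "accuracy": 4, "tone": 4, "creativity": 3, "structure": 5},
}

for version, metrics in scores.items():
    mean = sum(metrics.values()) / len(metrics)
    print(f"{version}: mean score {mean:.1f}")
```

An unweighted mean treats all five criteria as equally important; per-project weights are easy to add if one dimension matters more for a given experiment.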

⚖️ Project Disclaimers and Notes

Evaluation Methodology

The scores (Clarity, Accuracy, Tone, etc.) found throughout this journal are subjective human judgments, assigned by the author against the fixed criteria defined above. They are not generated or assigned by any of the AI tools tested.

Coincidence Note

All tasks, product names, and content generated are entirely fictional and for educational purposes only. Any resemblance to real-world entities is purely coincidental.