Mini Project

✍️ Prompt Engineering


Overview

Prompt Engineering is the process of designing and structuring inputs (prompts) to effectively communicate tasks and intentions to a Large Language Model (LLM).
In this project, I explored how carefully crafted prompts can guide the model’s reasoning, improve consistency, and reduce ambiguity during web automation and information retrieval.

Objective

The main objective of this mini-project was to understand how prompt design affects the output quality of LLMs, and how it can be applied to SERA for better task planning and automation.

What I Explored

  • Prompt Structuring: Experimented with different prompt templates and formats (e.g., instruction-based, role-based, and chain-of-thought prompting); see the first sketch after this list.
  • Context Inclusion: Tested how including contextual background or previous user interactions improved output accuracy.
  • Prompt Optimization: Adjusted wording, the order of instructions, and in-prompt examples to reduce model hallucinations.
  • Dynamic Prompting: Injected run-time variables (such as the user's goal and accumulated context) into prompt templates so responses adapt to each task; see the second sketch below.
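
Example: Prompt Structuring

The sketch below illustrates the three prompt formats compared above on a simple web-extraction task. The wording, the `build_prompt` helper, and the sample page text are illustrative assumptions, not SERA's actual prompts.

```python
# Illustrative templates for the three prompt formats explored in this project.
# The task (product extraction) and wording are examples only.

INSTRUCTION_PROMPT = (
    "Extract the product name, price, and availability from the page text below.\n"
    "Return the result as JSON with keys: name, price, available.\n\n"
    "Page text:\n{page_text}"
)

ROLE_PROMPT = (
    "You are a meticulous web-automation assistant. You read raw page text and\n"
    "report only facts that appear in it.\n\n"
    "Task: extract the product name, price, and availability from the page text.\n"
    "Page text:\n{page_text}"
)

CHAIN_OF_THOUGHT_PROMPT = (
    "Extract the product name, price, and availability from the page text below.\n"
    "First, list the lines of the page that mention each field. Then give the\n"
    "final answer as JSON with keys: name, price, available.\n\n"
    "Page text:\n{page_text}"
)


def build_prompt(template: str, page_text: str) -> str:
    """Fill a template with page text gathered during web automation."""
    return template.format(page_text=page_text)


if __name__ == "__main__":
    sample = "Acme Kettle 1.7L\nPrice: $39.99\nIn stock - ships tomorrow"
    for name, template in [
        ("instruction", INSTRUCTION_PROMPT),
        ("role", ROLE_PROMPT),
        ("chain-of-thought", CHAIN_OF_THOUGHT_PROMPT),
    ]:
        print(f"--- {name} ---")
        print(build_prompt(template, sample))
        print()
```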
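
Example: Dynamic Prompting

This second sketch shows one way to interpolate a user goal and accumulated context into a prompt at run time, in the spirit of the dynamic prompting described above. The `TaskContext` fields and template are assumptions made for illustration; they are not taken from SERA's codebase.

```python
from dataclasses import dataclass, field


@dataclass
class TaskContext:
    """Hypothetical state carried between automation steps (illustrative fields)."""
    user_goal: str
    visited_urls: list[str] = field(default_factory=list)
    prior_findings: list[str] = field(default_factory=list)


DYNAMIC_TEMPLATE = """\
You are assisting with the goal: {user_goal}

Pages visited so far:
{visited}

Facts gathered so far:
{findings}

Given the current page text below, decide the next action (follow a link,
extract a value, or stop) and justify it in one sentence.

Current page text:
{page_text}
"""


def render_prompt(ctx: TaskContext, page_text: str) -> str:
    """Interpolate the evolving task context into the prompt at run time."""
    visited = "\n".join(f"- {u}" for u in ctx.visited_urls) or "- (none yet)"
    findings = "\n".join(f"- {f}" for f in ctx.prior_findings) or "- (none yet)"
    return DYNAMIC_TEMPLATE.format(
        user_goal=ctx.user_goal,
        visited=visited,
        findings=findings,
        page_text=page_text,
    )


if __name__ == "__main__":
    ctx = TaskContext(
        user_goal="Find a 1.7L electric kettle under $50",
        visited_urls=["https://example.com/kettles"],
        prior_findings=["Acme Kettle 1.7L listed at $39.99"],
    )
    print(render_prompt(ctx, "Bravo Kettle 1.7L\nPrice: $44.50\nOut of stock"))
```

Because the context block grows as the task progresses, the same template yields prompts that stay grounded in what the agent has already seen, which is the behaviour the dynamic-prompting experiments were testing.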