
Lesson 1.4: Prompt Engineering for Educators


Prompt engineering is the practice of crafting effective inputs to get optimal outputs from Large Language Models. Since LLMs respond based on the patterns they've learned, how you phrase your request significantly impacts the quality and relevance of the response. Effective prompts typically include several elements: context about the situation, specific instructions about what you want, constraints or requirements, and desired format.

For educators, mastering prompt engineering is like learning a new teaching skill—it becomes more intuitive with practice but follows certain principles. Good prompts are specific rather than vague, provide relevant context, specify the audience or purpose, and clearly state any constraints. The difference between asking "Make a lesson plan" and "Create a forty-five minute lesson plan for Year 9 students introducing the Pythagorean theorem, including a hands-on activity, formative assessment, and differentiation strategies for students who struggle with abstract concepts" is substantial in terms of output quality.
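The four elements above (context, instructions, constraints, desired format) can be sketched as a simple prompt template. This is an illustrative helper, not part of any LLM library; the function and field names are assumptions chosen for clarity.

```python
def build_prompt(context: str, instructions: str,
                 constraints: str, output_format: str) -> str:
    """Assemble the four common elements of an effective prompt
    into a single labelled request string."""
    return (
        f"Context: {context}\n"
        f"Task: {instructions}\n"
        f"Constraints: {constraints}\n"
        f"Format: {output_format}"
    )

# The Pythagorean theorem example from the paragraph above:
prompt = build_prompt(
    context="Year 9 maths class, first lesson introducing the Pythagorean theorem",
    instructions="Create a forty-five minute lesson plan with a hands-on activity "
                 "and a formative assessment",
    constraints="Include differentiation strategies for students who struggle "
                "with abstract concepts",
    output_format="Numbered lesson stages with approximate timings",
)
print(prompt)
```

Filling in each field forces you to supply the specifics that separate the strong prompt from "Make a lesson plan".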

Examples: Consider these contrasting prompts: Weak prompt: "Help me teach fractions." This gives the LLM almost no information to work with. Strong prompt: "I'm teaching equivalent fractions to Year 4 students who understand basic fractions but struggle with the concept that different fractions can represent the same amount. Create three concrete, hands-on activities using materials available in a typical classroom, with each activity taking about ten minutes. Include guiding questions I can ask to check understanding." The strong prompt specifies the year level, identifies the specific concept and common misconception, requests a particular type of activity, sets time constraints, and asks for assessment questions. Teachers in Australian schools have found that including specific references to Australian Curriculum content descriptors in prompts produces more relevant and aligned materials.
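One way to make the strong-prompt checklist concrete is a small helper that flags missing elements before a prompt is sent. The checklist items and keywords below are assumptions drawn from the fractions example above, not a standard or a library API.

```python
# Checklist items a strong educational prompt should mention,
# with keywords that signal each one (illustrative, not exhaustive).
REQUIRED_ELEMENTS = {
    "year level": ["year", "grade"],
    "time constraint": ["minute", "minutes"],
    "assessment": ["assessment", "check understanding", "guiding questions"],
}

def missing_elements(prompt: str) -> list[str]:
    """Return the checklist items the prompt does not mention."""
    text = prompt.lower()
    return [name for name, keywords in REQUIRED_ELEMENTS.items()
            if not any(k in text for k in keywords)]

weak = "Help me teach fractions."
strong = ("I'm teaching equivalent fractions to Year 4 students who struggle "
          "with the concept that different fractions can represent the same "
          "amount. Create three ten-minute hands-on activities with guiding "
          "questions I can ask to check understanding.")

print(missing_elements(weak))    # all three checklist items are missing
print(missing_elements(strong))  # []
```

Running the check on the weak prompt reports every element missing, while the strong prompt passes cleanly, which mirrors the contrast the example draws.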

Case Study: Victoria's Department of Education developed professional learning modules teaching educators prompt engineering specifically for curriculum planning. Teachers learned to break complex tasks into steps, use "chain-of-thought" prompting where they ask the LLM to explain its reasoning, and iterate on outputs. One particularly effective strategy was teaching teachers to adopt a "co-creation" mindset: use the LLM to generate a first draft, critically evaluate and modify it, then use the LLM again to refine specific elements. A case study following a group of teachers over one term found that those who completed the prompt engineering training saved an average of three hours per week on planning tasks and reported that the quality of their differentiated materials improved because they could efficiently create multiple versions of activities for different learning needs.
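The "co-creation" workflow the case study describes (draft, critically evaluate, refine) is essentially a loop. The sketch below shows that loop shape; `llm` is a hypothetical stand-in for any function mapping a prompt to a response, and the toy implementations exist only so the example runs without an API.

```python
from typing import Callable, Optional

def co_create(llm: Callable[[str], str], task: str,
              review: Callable[[str], Optional[str]]) -> str:
    """Draft-evaluate-refine loop: generate a first draft, then keep
    asking for revisions until the reviewer accepts (returns None)."""
    draft = llm(f"Draft: {task}")
    while (feedback := review(draft)) is not None:
        draft = llm(f"Revise this draft to address: {feedback}\n\n{draft}")
    return draft

# Toy stand-ins so the loop is runnable without any LLM API:
def fake_llm(prompt: str) -> str:
    return "worksheet v2" if prompt.startswith("Revise") else "worksheet v1"

def teacher_review(draft: str) -> Optional[str]:
    # Teacher's critical evaluation step: request one revision, then accept.
    return "add a challenge question" if draft == "worksheet v1" else None

result = co_create(fake_llm, "equivalent fractions worksheet", teacher_review)
print(result)  # worksheet v2
```

The key design point matches the training described above: the human reviewer stays in the loop, and each LLM call targets a specific, named shortcoming rather than regenerating from scratch.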

Questions:

  1. What is prompt engineering?

    • a) Building computer hardware
    • b) The practice of crafting effective inputs to optimize LLM outputs
    • c) Programming neural networks
    • d) Designing engineering courses
    • Answer: b
  2. Which element is NOT typically part of an effective prompt?

    • a) Context about the situation
    • b) Specific instructions
    • c) Deliberately vague language
    • d) Desired format or constraints
    • Answer: c
  3. What makes a strong educational prompt compared to a weak one?

    • a) Using more technical jargon
    • b) Keeping it as short as possible
    • c) Including specific context, audience, constraints, and desired outcomes
    • d) Making it as general as possible for flexibility
    • Answer: c
  4. What is "chain-of-thought" prompting?

    • a) Asking multiple unrelated questions in sequence
    • b) Asking the LLM to explain its reasoning process
    • c) Providing the LLM with chains of data
    • d) Connecting multiple LLMs together
    • Answer: b
  5. How should educators approach LLM outputs according to best practices?

    • a) Accept all outputs without modification
    • b) Never use LLM-generated content
    • c) Use outputs as first drafts to critically evaluate, modify, and refine
    • d) Only use outputs if they're perfect the first time
    • Answer: c