How do I get ChatGPT to properly request and render a plugin's customized Markdown? I see the Speak plugin does this: its request and response (below) use customized Markdown. How does ChatGPT know how to request, and then render, a customized Markdown response like this?
Request:

```json
{
  "phrase_to_translate": "hello",
  "learning_language": "Vietnamese",
  "native_language": "English",
  "full_query": "Say hello in Vietnamese"
}
```
Response:

```json
{
  "explanation": "<translation language=\"Vietnamese\" context=\"None\">\nXin chào\n</translation>\n\n<alternatives context=\"None\">\n1. \"Chào bạn/anh/chị\" *(Neutral - commonly used as a general greeting in both formal and informal settings)*\n2. \"Alo\" *(Informal/Slang - originally borrowed from English, it is typically used when answering the phone or to greet someone casually)*\n3. \"Ê, thằng ơi!\" *(Slang - very informal phrase used among friends, often accompanied by a playful punch on the arm or shoulder)*\n</alternatives>\n\n<example-convo language=\"Vietnamese\">\n<context>Two friends bump into each other unexpectedly on the street.</context>\n* Lan: \"Chào anh Tùng! Lâu quá không gặp.\"\n* Tùng: \"À chào em Lan! Thực sự lâu rồi đấy.\"\n</example-convo>\n\n*[Report an issue or leave feedback](https://speak.com/chatgpt?rid=uv3dnukmqr1h58fh5saeuzne})*",
  "extra_response_instructions": "Use all information in the API response and fully render all Markdown.\nAlways end your response with a link to report an issue or leave feedback on the plugin."
}
```
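Note how the response pairs the Markdown content with an `extra_response_instructions` field telling the model what to do with it. A minimal sketch of that pattern (my own illustration, not Speak's actual implementation — the function name and structure are assumptions):

```python
import json

def build_plugin_response(markdown_content: str) -> str:
    """Build a Speak-style plugin response: the payload carries the
    Markdown content plus plain-language rendering instructions that
    the model reads alongside the data."""
    payload = {
        "explanation": markdown_content,
        # Instructions travel with the data, steering the model to
        # render the Markdown rather than quote it verbatim.
        "extra_response_instructions": (
            "Use all information in the API response and fully render "
            "all Markdown.\nAlways end your response with a link to "
            "report an issue or leave feedback on the plugin."
        ),
    }
    return json.dumps(payload)

response = build_plugin_response(
    '<translation language="Vietnamese">\nXin chào\n</translation>'
)
```

The model sees the whole JSON body, so the instructions field works much like an inline system prompt scoped to that one API result.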
You've made an important distinction between requesting and rendering Markdown. The ChatGPT app automatically renders any Markdown the language model (GPT-n) outputs, but getting the model to output Markdown in the first place can be challenging.
You can more reliably get Markdown output by asking for it explicitly. Now that we can give a custom GPT (the ChatGPT customizer, not the language model itself) its own instructions, you can prompt it with something like:
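For instance (illustrative wording of my own, not taken from the Speak plugin):

```
Always format your responses in Markdown. Use headings, bulleted lists,
bold for key terms, and fenced code blocks for any code. When an API
response includes Markdown, render it directly rather than quoting it
as plain text.
```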
(Giving examples of the output you want is a reliably effective prompting technique if simply asking for Markdown doesn't work well; I use this in my own GPT.)
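A sketch of that example-based technique (again my own illustrative wording): embed a sample of the desired output directly in the instructions, e.g.

```
When the user asks for a translation, respond in exactly this format:

## Translation
**Xin chào**

## Alternatives
1. "Chào bạn" *(neutral, everyday greeting)*
```

The model then imitates the shape of the sample, which tends to hold up better than an abstract request for "Markdown formatting".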