Running ChatGPT programmatically - How to continue conversation without re-submitting all past messages?


One can obtain a ChatGPT response to a prompt using the following example:

from openai import OpenAI

client = OpenAI()  # requires key in OPEN_AI_KEY environment variable

completion = client.chat.completions.create(
  model="gpt-3.5-turbo",
  messages=[
    {"role": "system", "content": "You are a poetic assistant, skilled in explaining complex programming concepts with creative flair."},
    {"role": "user", "content": "Compose a poem that explains the concept of recursion in programming."}
  ]
)

print(completion.choices[0].message.content)

How can one continue the conversation? I've seen examples saying you just add a new message to the list of messages and re-submit:

# Continue the conversation by including the initial messages and adding a new one
continued_completion = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system", "content": "You are a poetic assistant, skilled in explaining complex programming concepts with creative flair."},
        {"role": "user", "content": "Compose a poem that explains the concept of recursion in programming."},
        {"role": "assistant", "content": completion.choices[0].message.content},  # Include the initial response
        {"role": "user", "content": "Can you elaborate more on how recursion can lead to infinite loops if not properly handled?"}  # New follow-up prompt
    ]
)

But I would imagine this means processing the previous messages all over again at every new prompt, which seems quite wasteful. Is that really the only way? Isn't there a way to keep a "session" of some sort that keeps ChatGPT's internal state and just processes a newly given prompt?

1 Answer

Answered by Yilmaz:

From the LangChain documentation:

Conversation Summary

Now let's take a look at using a slightly more complex type of memory

  • ConversationSummaryMemory. This type of memory creates a summary of the conversation over time. This can be useful for condensing information from the conversation over time. Conversation summary memory summarizes the conversation as it happens and stores the current summary in memory. This memory can then be used to inject the summary of the conversation so far into a prompt/chain. This memory is most useful for longer conversations, where keeping the past message history in the prompt verbatim would take up too many tokens.
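The pattern can be sketched without LangChain: instead of re-sending the verbatim history, keep a rolling summary and fold each new exchange into it. In this sketch, `summarize` is a hypothetical stand-in for an LLM call that condenses text (in LangChain that role is played by a summarization prompt sent to the model):

```python
class SummaryMemory:
    """Keeps a rolling summary of the conversation instead of full history."""

    def __init__(self, summarize):
        # `summarize` is a callable (old_summary, new_exchange) -> new_summary.
        # With a real LLM, this would prompt the model to merge the two texts.
        self.summarize = summarize
        self.summary = ""

    def update(self, user_text, assistant_text):
        # Fold the latest exchange into the running summary.
        exchange = f"User: {user_text}\nAssistant: {assistant_text}"
        self.summary = self.summarize(self.summary, exchange)

    def as_messages(self, system_prompt, user_text):
        # Inject the summary into the prompt in place of the full history,
        # so each request stays short regardless of conversation length.
        return [
            {"role": "system",
             "content": f"{system_prompt}\nConversation so far: {self.summary}"},
            {"role": "user", "content": user_text},
        ]
```

The trade-off versus verbatim history is lossiness: the summary may drop details the model would need later, which is why this approach is recommended mainly for long conversations where token cost dominates.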

The Chat Completions API itself is stateless: each request is processed independently, and there is no server-side session that retains the model's internal state between calls. So in a conversational context where you want to maintain continuity, you must send some form of conversation history (verbatim messages, a summary, or a mix) with every request. That history is what lets the model understand the ongoing dialogue and generate contextually relevant responses.
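In practice this usually means wrapping the message list in a small helper that appends each user turn and assistant reply before the next call. A minimal sketch, where `send` is an injected stand-in for the actual API call (with the real client it would wrap `client.chat.completions.create`):

```python
class Conversation:
    """Accumulates chat history and re-sends all of it on every turn."""

    def __init__(self, send, system_prompt):
        # `send` is any callable taking a list of message dicts and
        # returning the assistant's reply text.
        self.send = send
        self.messages = [{"role": "system", "content": system_prompt}]

    def ask(self, user_text):
        self.messages.append({"role": "user", "content": user_text})
        reply = self.send(self.messages)
        # Store the reply so the next turn has the full context.
        self.messages.append({"role": "assistant", "content": reply})
        return reply
```

With the real API, `send` could be `lambda msgs: client.chat.completions.create(model="gpt-3.5-turbo", messages=msgs).choices[0].message.content`. Yes, the earlier messages are reprocessed on every call, but that reprocessing is also what you pay for as input tokens; the summary-memory approach above is the usual way to cap that cost.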