Reputation: 8200
The context is that I'm streaming LLM tokens from a model, and they're in Markdown, so I want to repeatedly append to the rendered Markdown.
This is roughly the code I'm using with bare text:
async for chunk in response.receive():
    print(chunk.text, end='')
Which outputs:
# Document heading
Intro text
* A bullet point
* Another bullet point
But I want to render the markdown:
from IPython.display import display, Markdown

async for chunk in response.receive():
    display(Markdown(chunk.text))
Since each call outputs a separate Markdown block, there are breaks between the chunks (with occasional formatting applied):
Document
heading
Intro
text
*
A
bullet point
*
Another
bullet point
Is there a way to do this naturally with IPython or another library, or do I need to manually buffer and re-render the response?
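For reference, this is a minimal sketch of what I mean by manually buffering and re-rendering, assuming response.receive() yields chunks with a .text attribute as in the snippets above:

from IPython.display import display, Markdown

buffer = ""
# display_id=True returns a DisplayHandle whose update() replaces the output in place
handle = display(Markdown(buffer), display_id=True)

async for chunk in response.receive():
    buffer += chunk.text
    # Re-render the accumulated Markdown into the same output area
    handle.update(Markdown(buffer))

This avoids stacking a new output block per chunk, but it re-parses the entire buffer on every token, which is what I was hoping a library could handle more gracefully.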
Upvotes: 0
Views: 17