Streaming Support
The framework supports streaming responses from its LLM backends (Ollama and Gemini), allowing for real-time feedback in your applications.
Usage
To enable streaming, pass stream=True to the flow.process_turn() method.
Note
Currently, enabling streaming prints chunks directly to stdout as they arrive. This provides immediate visual feedback in CLI applications, but a generator object is not yet returned to the caller.
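The behavior above can be sketched with a minimal stand-in. This Flow class and its canned chunks are hypothetical, written only to illustrate the documented call shape (flow.process_turn(..., stream=True)) and the print-to-stdout streaming mode; returning the assembled text at the end is also an assumption, not the framework's guaranteed contract.

```python
import sys


class Flow:
    """Stand-in flow that yields canned chunks instead of calling an LLM."""

    def _generate_chunks(self, prompt):
        # Hypothetical chunk source; a real backend (Ollama/Gemini) would
        # yield pieces of the model's response as they arrive.
        for chunk in ["Hello", ", ", "world", "!"]:
            yield chunk

    def process_turn(self, prompt, stream=False):
        if stream:
            # Streaming mode: print each chunk to stdout as it arrives,
            # matching the current behavior described in the note above.
            pieces = []
            for chunk in self._generate_chunks(prompt):
                sys.stdout.write(chunk)
                sys.stdout.flush()
                pieces.append(chunk)
            sys.stdout.write("\n")
            return "".join(pieces)
        # Non-streaming mode: return the full response in one piece.
        return "".join(self._generate_chunks(prompt))


flow = Flow()
response = flow.process_turn("Say hello", stream=True)
```

With stream=True the chunks appear on stdout incrementally; with the default stream=False the caller simply receives the complete response string.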