Package dev.langchain4j.service
Class AiServiceTokenStream
java.lang.Object
dev.langchain4j.service.AiServiceTokenStream
All Implemented Interfaces:
TokenStream
Constructor Summary
AiServiceTokenStream(List<dev.langchain4j.data.message.ChatMessage> messages, List<dev.langchain4j.agent.tool.ToolSpecification> toolSpecifications, Map<String, ToolExecutor> toolExecutors, List<dev.langchain4j.rag.content.Content> retrievedContents, AiServiceContext context, Object memoryId)
Method Summary

TokenStream ignoreErrors()
All errors during streaming will be ignored (but will be logged with a WARN log level).

TokenStream onComplete(Consumer<dev.langchain4j.model.output.Response<dev.langchain4j.data.message.AiMessage>> completionHandler)
The provided consumer will be invoked when a language model finishes streaming a response.

TokenStream onError(Consumer<Throwable> errorHandler)
The provided consumer will be invoked when an error occurs during streaming.

TokenStream onNext(Consumer<String> tokenHandler)
The provided consumer will be invoked every time a new token from a language model is available.

TokenStream onRetrieved(Consumer<List<dev.langchain4j.rag.content.Content>> contentsHandler)
The provided consumer will be invoked when/if contents have been retrieved using RetrievalAugmentor.

void start()
Completes the current token stream building and starts processing.
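The methods above form a fluent builder: callbacks are registered first, each call returns the stream for chaining, and start() triggers processing. A minimal self-contained sketch of that pattern (SimpleTokenStream and FixedTokenStream are hypothetical stand-ins, not part of LangChain4j; the real stream calls a language model rather than replaying fixed tokens):

```java
import java.util.List;
import java.util.function.Consumer;

class TokenStreamSketch {

    // Hypothetical, trimmed-down version of the fluent interface documented above.
    interface SimpleTokenStream {
        SimpleTokenStream onNext(Consumer<String> tokenHandler);
        SimpleTokenStream onComplete(Runnable completionHandler);
        void start();
    }

    // Toy implementation that "streams" a fixed token list synchronously.
    static class FixedTokenStream implements SimpleTokenStream {
        private final List<String> tokens;
        private Consumer<String> tokenHandler = t -> {};
        private Runnable completionHandler = () -> {};

        FixedTokenStream(List<String> tokens) { this.tokens = tokens; }

        public SimpleTokenStream onNext(Consumer<String> h) { this.tokenHandler = h; return this; }
        public SimpleTokenStream onComplete(Runnable h) { this.completionHandler = h; return this; }

        public void start() {
            tokens.forEach(tokenHandler);   // emit each token to the registered handler
            completionHandler.run();        // then signal completion
        }
    }

    public static void main(String[] args) {
        StringBuilder answer = new StringBuilder();
        new FixedTokenStream(List.of("Hello", ", ", "world"))
                .onNext(answer::append)
                .onComplete(() -> System.out.println("done: " + answer))
                .start();
    }
}
```

The key design point mirrored here is that nothing happens until start(): registering a handler is side-effect free, so handlers can be attached in any order before processing begins.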
Constructor Details
AiServiceTokenStream
public AiServiceTokenStream(List<dev.langchain4j.data.message.ChatMessage> messages, List<dev.langchain4j.agent.tool.ToolSpecification> toolSpecifications, Map<String, ToolExecutor> toolExecutors, List<dev.langchain4j.rag.content.Content> retrievedContents, AiServiceContext context, Object memoryId)
Method Details
onNext
public TokenStream onNext(Consumer<String> tokenHandler)
Description copied from interface: TokenStream
The provided consumer will be invoked every time a new token from a language model is available.
Specified by:
onNext in interface TokenStream
Parameters:
tokenHandler - lambda that consumes tokens of the response
Returns:
token stream instance used to configure or start stream processing
onRetrieved
public TokenStream onRetrieved(Consumer<List<dev.langchain4j.rag.content.Content>> contentsHandler)
Description copied from interface: TokenStream
The provided consumer will be invoked when/if contents have been retrieved using RetrievalAugmentor. The invocation happens before any call is made to the language model.
Specified by:
onRetrieved in interface TokenStream
Parameters:
contentsHandler - lambda that consumes all retrieved contents
Returns:
token stream instance used to configure or start stream processing
onComplete
public TokenStream onComplete(Consumer<dev.langchain4j.model.output.Response<dev.langchain4j.data.message.AiMessage>> completionHandler)
Description copied from interface: TokenStream
The provided consumer will be invoked when a language model finishes streaming a response.
Specified by:
onComplete in interface TokenStream
Parameters:
completionHandler - lambda that will be invoked when the language model finishes streaming
Returns:
token stream instance used to configure or start stream processing
onError
public TokenStream onError(Consumer<Throwable> errorHandler)
Description copied from interface: TokenStream
The provided consumer will be invoked when an error occurs during streaming.
Specified by:
onError in interface TokenStream
Parameters:
errorHandler - lambda that will be invoked when an error occurs
Returns:
token stream instance used to configure or start stream processing
ignoreErrors
public TokenStream ignoreErrors()
Description copied from interface: TokenStream
All errors during streaming will be ignored (but will be logged with a WARN log level).
Specified by:
ignoreErrors in interface TokenStream
Returns:
token stream instance used to configure or start stream processing
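onError and ignoreErrors are the two mutually exclusive ways to deal with streaming failures: deliver the Throwable to a handler, or swallow it (the real implementation still logs at WARN). A self-contained sketch of that choice (FailingStream is a hypothetical stand-in that always fails, not part of LangChain4j):

```java
import java.util.function.Consumer;

class ErrorHandlingSketch {

    // Toy stream that always fails on start(), to show the two error policies.
    static class FailingStream {
        private Consumer<Throwable> errorHandler;
        private boolean ignoreErrors;

        FailingStream onError(Consumer<Throwable> h) { this.errorHandler = h; return this; }
        FailingStream ignoreErrors() { this.ignoreErrors = true; return this; }

        void start() {
            RuntimeException failure = new RuntimeException("connection reset");
            if (ignoreErrors) {
                // stand-in for the WARN-level log the real implementation emits
                System.err.println("WARN: ignored streaming error: " + failure.getMessage());
            } else if (errorHandler != null) {
                errorHandler.accept(failure);   // deliver the error to the registered handler
            }
        }
    }

    public static void main(String[] args) {
        StringBuilder seen = new StringBuilder();
        new FailingStream().onError(e -> seen.append(e.getMessage())).start();
        System.out.println("handler saw: " + seen);

        new FailingStream().ignoreErrors().start();   // error only logged, nothing propagated
    }
}
```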
start
public void start()
Description copied from interface: TokenStream
Completes the current token stream building and starts processing. Will send a request to the LLM and start response streaming.
Specified by:
start in interface TokenStream