Class AiServices<T>
- Type Parameters:
T - The interface for which AiServices will provide an implementation.
Currently, AI Services support:
- Static system message templates, configured via the @SystemMessage annotation on the method
- Dynamic system message templates, configured via systemMessageProvider(Function)
- Static user message templates, configured via the @UserMessage annotation on the method
- Dynamic user message templates, configured via a method parameter annotated with @UserMessage
- A single (shared) ChatMemory, configured via chatMemory(ChatMemory)
- Separate (per-user) ChatMemory, configured via chatMemoryProvider(ChatMemoryProvider) and a method parameter annotated with @MemoryId
- RAG, configured via contentRetriever(ContentRetriever) or retrievalAugmentor(RetrievalAugmentor)
- Tools, configured via tools(List) or tools(Object...) and methods annotated with @Tool
- Various method return types (output parsers); see more details below
- Streaming (use TokenStream as a return type)
- Structured prompts as method arguments (see @StructuredPrompt)
- Auto-moderation, configured via the @Moderate annotation
Here is the simplest example of an AI Service:
interface Assistant {
String chat(String userMessage);
}
Assistant assistant = AiServices.create(Assistant.class, model);
String answer = assistant.chat("hello");
System.out.println(answer); // Hello, how can I help you today?
The return type of methods in your AI Service can be any of the following:
- a String or an AiMessage, if you want to get the answer from the LLM as-is
- a List<String> or Set<String>, if you want to receive the answer as a collection of items or bullet points
- any Enum or a boolean, if you want to use the LLM for classification
- a primitive or boxed Java type: int, Double, etc., if you want to use the LLM for data extraction
- many default Java types: Date, LocalDateTime, BigDecimal, etc., if you want to use the LLM for data extraction
- any custom POJO, if you want to use the LLM for data extraction
- Result<T>, if you want to access TokenUsage or sources (Contents retrieved during RAG) in addition to T, which can be of any type listed above, for example Result<String> or Result<MyCustomPojo> (see the sketch below)
For POJOs, it is advisable to use the "json mode" feature if the LLM provider supports it. For OpenAI, this can be enabled by calling responseFormat("json_object") during model construction.
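For illustration, here is a minimal data-extraction sketch. The Person POJO, the prompt texts, and the variable names are made up; the Result accessors content() and tokenUsage() are assumed to behave as described above.

// Hypothetical POJO, used only to illustrate data extraction.
class Person {
    String firstName;
    String lastName;
}

interface Extractor {

    @UserMessage("Extract information about a person from the following text: {{it}}")
    Person extractPersonFrom(String text);

    @UserMessage("Summarize the following text in one sentence: {{it}}")
    Result<String> summarize(String text);
}

Extractor extractor = AiServices.create(Extractor.class, model);

Person person = extractor.extractPersonFrom("John Doe was born in ...");

Result<String> result = extractor.summarize("Some long text ...");
String summary = result.content();           // the T value itself
TokenUsage tokenUsage = result.tokenUsage(); // token usage of the underlying call(s)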
Let's see how we can classify the sentiment of a text:
enum Sentiment {
POSITIVE, NEUTRAL, NEGATIVE
}
interface SentimentAnalyzer {
@UserMessage("Analyze sentiment of {{it}}")
Sentiment analyzeSentimentOf(String text);
}
SentimentAnalyzer sentimentAnalyzer = AiServices.create(SentimentAnalyzer.class, model);
Sentiment sentiment = sentimentAnalyzer.analyzeSentimentOf("I love you");
System.out.println(sentiment); // POSITIVE
As demonstrated, you can put @UserMessage and @SystemMessage annotations above a method to define
templates for user and system messages, respectively.
In this example, the special {{it}} prompt template variable is used because there's only one method parameter.
However, you can use more parameters as demonstrated in the following example:
interface Translator {
@SystemMessage("You are a professional translator into {{language}}")
@UserMessage("Translate the following text: {{text}}")
String translate(@V("text") String text, @V("language") String language);
}
See more examples here.
-
Field Summary
Fields -
Constructor Summary
Constructors -
Method Summary
Modifier and Type | Method | Description
abstract T | build() | Constructs and returns the AI Service.
static <T> AiServices<T> | builder(Class<T> aiService) | Begins the construction of an AI Service.
AiServices<T> | chatLanguageModel(dev.langchain4j.model.chat.ChatLanguageModel chatLanguageModel) | Configures the chat model that will be used under the hood of the AI Service.
AiServices<T> | chatMemory(dev.langchain4j.memory.ChatMemory chatMemory) | Configures the chat memory that will be used to preserve conversation history between method calls.
AiServices<T> | chatMemoryProvider(ChatMemoryProvider chatMemoryProvider) | Configures the chat memory provider, which provides a dedicated instance of ChatMemory for each user/conversation.
AiServices<T> | contentRetriever(dev.langchain4j.rag.content.retriever.ContentRetriever contentRetriever) | Configures a content retriever to be invoked on every method call for retrieving relevant content related to the user's message from an underlying data source (e.g., an embedding store in the case of an EmbeddingStoreContentRetriever).
static <T> T | create(Class<T> aiService, dev.langchain4j.model.chat.ChatLanguageModel chatLanguageModel) | Creates an AI Service (an implementation of the provided interface) that is backed by the provided chat model.
static <T> T | create(Class<T> aiService, dev.langchain4j.model.chat.StreamingChatLanguageModel streamingChatLanguageModel) | Creates an AI Service (an implementation of the provided interface) that is backed by the provided streaming chat model.
AiServices<T> | moderationModel(dev.langchain4j.model.moderation.ModerationModel moderationModel) | Configures a moderation model to be used for automatic content moderation.
protected void | performBasicValidation() |
static List<dev.langchain4j.data.message.ChatMessage> | removeToolMessages(List<dev.langchain4j.data.message.ChatMessage> messages) |
AiServices<T> | retrievalAugmentor(dev.langchain4j.rag.RetrievalAugmentor retrievalAugmentor) | Configures a retrieval augmentor to be invoked on every method call.
AiServices<T> | retriever(dev.langchain4j.retriever.Retriever<dev.langchain4j.data.segment.TextSegment> retriever) | Deprecated.
AiServices<T> | streamingChatLanguageModel(dev.langchain4j.model.chat.StreamingChatLanguageModel streamingChatLanguageModel) | Configures the streaming chat model that will be used under the hood of the AI Service.
AiServices<T> | systemMessageProvider(Function<Object, String> systemMessageProvider) | Configures the system message provider, which provides a system message to be used each time an AI service is invoked.
AiServices<T> | toolProvider(ToolProvider toolProvider) | Configures the tool provider that the LLM can use.
AiServices<T> | tools(List<Object> objectsWithTools) | Configures the tools that the LLM can use.
AiServices<T> | tools(Object... objectsWithTools) | Configures the tools that the LLM can use.
AiServices<T> | tools(Map<dev.langchain4j.agent.tool.ToolSpecification, ToolExecutor> tools) | Configures the tools that the LLM can use.
static void | verifyModerationIfNeeded(Future<dev.langchain4j.model.moderation.Moderation> moderationFuture) |
-
Field Details
-
DEFAULT
- See Also:
-
context
-
-
Constructor Details
-
AiServices
-
-
Method Details
-
create
public static <T> T create(Class<T> aiService, dev.langchain4j.model.chat.ChatLanguageModel chatLanguageModel)
Creates an AI Service (an implementation of the provided interface) that is backed by the provided chat model. This convenience method can be used to create simple AI Services. For more complex cases, please use builder(java.lang.Class<T>).
- Parameters:
aiService - The class of the interface to be implemented.
chatLanguageModel - The chat model to be used under the hood.
- Returns:
- An instance of the provided interface, implementing all its defined methods.
-
create
public static <T> T create(Class<T> aiService, dev.langchain4j.model.chat.StreamingChatLanguageModel streamingChatLanguageModel)
Creates an AI Service (an implementation of the provided interface) that is backed by the provided streaming chat model. This convenience method can be used to create simple AI Services. For more complex cases, please use builder(java.lang.Class<T>).
- Parameters:
aiService - The class of the interface to be implemented.
streamingChatLanguageModel - The streaming chat model to be used under the hood. The return type of all methods should be TokenStream.
- Returns:
- An instance of the provided interface, implementing all its defined methods.
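A minimal streaming sketch, assuming streamingModel is an already-configured StreamingChatLanguageModel and that the TokenStream callbacks onNext, onComplete, onError and start behave as described for TokenStream:

interface StreamingAssistant {

    TokenStream chat(String userMessage); // methods must return TokenStream when a streaming model is used
}

StreamingAssistant assistant = AiServices.create(StreamingAssistant.class, streamingModel);

assistant.chat("Tell me a joke")
        .onNext(System.out::print)                    // invoked for each new token
        .onComplete(response -> System.out.println()) // invoked once the full response is available
        .onError(Throwable::printStackTrace)
        .start();                                     // nothing happens until start() is called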
-
builder
Begins the construction of an AI Service.
- Parameters:
aiService - The class of the interface to be implemented.
- Returns:
- builder
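For example, the minimal builder usage (a sketch; Assistant and model are taken from the examples above):

Assistant assistant = AiServices.builder(Assistant.class)
        .chatLanguageModel(model) // equivalent to AiServices.create(Assistant.class, model)
        .build();                 // further options (memory, tools, RAG, ...) are described below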
-
chatLanguageModel
public AiServices<T> chatLanguageModel(dev.langchain4j.model.chat.ChatLanguageModel chatLanguageModel)
Configures the chat model that will be used under the hood of the AI Service.
Either ChatLanguageModel or StreamingChatLanguageModel should be configured, but not both at the same time.
- Parameters:
chatLanguageModel - The chat model that will be used under the hood of the AI Service.
- Returns:
- builder
-
streamingChatLanguageModel
public AiServices<T> streamingChatLanguageModel(dev.langchain4j.model.chat.StreamingChatLanguageModel streamingChatLanguageModel)
Configures the streaming chat model that will be used under the hood of the AI Service. The methods of the AI Service must return a TokenStream type.
Either ChatLanguageModel or StreamingChatLanguageModel should be configured, but not both at the same time.
- Parameters:
streamingChatLanguageModel - The streaming chat model that will be used under the hood of the AI Service.
- Returns:
- builder
-
systemMessageProvider
Configures the system message provider, which provides a system message to be used each time an AI service is invoked.
When both @SystemMessage and the system message provider are configured, @SystemMessage takes precedence.
- Parameters:
systemMessageProvider - A Function that accepts a chat memory ID (the value of a method parameter annotated with @MemoryId) and returns a system message to be used. If there is no parameter annotated with @MemoryId, the value of the memory ID is "default". The returned String can be either a complete system message or a system message template containing unresolved template variables (e.g. "{{name}}"), which will be resolved using the values of method parameters annotated with @V.
- Returns:
- builder
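A sketch of a dynamic system message; the prompt text is arbitrary, and Assistant and model are assumed from the earlier examples:

Assistant assistant = AiServices.builder(Assistant.class)
        .chatLanguageModel(model)
        // memoryId is "default" here, since Assistant has no parameter annotated with @MemoryId
        .systemMessageProvider(memoryId -> "You are a polite assistant. Answer concisely.")
        .build();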
-
chatMemory
Configures the chat memory that will be used to preserve conversation history between method calls.
Unless a ChatMemory or ChatMemoryProvider is configured, all method calls will be independent of each other. In other words, the LLM will not remember the conversation from previous method calls.
The same ChatMemory instance will be used for every method call.
If you want to have a separate ChatMemory for each user/conversation, configure chatMemoryProvider(dev.langchain4j.memory.chat.ChatMemoryProvider) instead.
Either a ChatMemory or a ChatMemoryProvider can be configured, but not both simultaneously.
- Parameters:
chatMemory - An instance of chat memory to be used by the AI Service.
- Returns:
- builder
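A sketch of a single shared memory; Assistant and model are assumed from the earlier examples, and MessageWindowChatMemory is only one possible ChatMemory implementation:

Assistant assistant = AiServices.builder(Assistant.class)
        .chatLanguageModel(model)
        .chatMemory(MessageWindowChatMemory.withMaxMessages(10)) // one memory shared by all calls
        .build();

assistant.chat("Hello, my name is Klaus");
assistant.chat("What is my name?"); // can be answered because the previous messages are remembered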
-
chatMemoryProvider
Configures the chat memory provider, which provides a dedicated instance of ChatMemory for each user/conversation. To distinguish between users/conversations, one of the method's arguments should be a memory ID (of any data type) annotated with MemoryId. For each new (previously unseen) memoryId, an instance of ChatMemory will be automatically obtained by invoking ChatMemoryProvider.get(Object id). Example:
interface Assistant {
    String chat(@MemoryId int memoryId, @UserMessage String message);
}
If you prefer to use the same (shared) ChatMemory for all users/conversations, configure chatMemory(dev.langchain4j.memory.ChatMemory) instead.
Either a ChatMemory or a ChatMemoryProvider can be configured, but not both simultaneously.
- Parameters:
chatMemoryProvider - The provider of a ChatMemory for each new user/conversation.
- Returns:
- builder
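A sketch of per-conversation memory, reusing the Assistant interface from the example above; model is assumed from the earlier examples and MessageWindowChatMemory is only one possible ChatMemory implementation:

Assistant assistant = AiServices.builder(Assistant.class)
        .chatLanguageModel(model)
        // a fresh ChatMemory is obtained for each previously unseen memoryId
        .chatMemoryProvider(memoryId -> MessageWindowChatMemory.withMaxMessages(10))
        .build();

assistant.chat(1, "Hello, my name is Klaus");
assistant.chat(2, "Hello, my name is Francine");
assistant.chat(1, "What is my name?"); // answered from the memory of conversation 1 only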
-
moderationModel
public AiServices<T> moderationModel(dev.langchain4j.model.moderation.ModerationModel moderationModel)
Configures a moderation model to be used for automatic content moderation. If a method in the AI Service is annotated with Moderate, the moderation model will be invoked to check the user content for any inappropriate or harmful material.
- Parameters:
moderationModel - The moderation model to be used for content moderation.
- Returns:
- builder
- See Also:
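A sketch of auto-moderation; moderationModel is assumed to be an already-configured ModerationModel, and model is the chat model from the earlier examples:

interface Assistant {

    @Moderate // the moderation model is invoked for calls to this method
    String chat(String userMessage);
}

Assistant assistant = AiServices.builder(Assistant.class)
        .chatLanguageModel(model)
        .moderationModel(moderationModel)
        .build();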
-
tools
Configures the tools that the LLM can use.
- Parameters:
objectsWithTools - One or more objects whose methods are annotated with Tool. All these tools (methods annotated with Tool) will be accessible to the LLM. Note that inherited methods are ignored.
- Returns:
- builder
- See Also:
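A sketch of a Tool-annotated object; the Calculator class and its tool are made up, and Assistant and model are assumed from the earlier examples:

class Calculator {

    @Tool("Calculates the square root of a number")
    double squareRoot(double x) {
        return Math.sqrt(x);
    }
}

Assistant assistant = AiServices.builder(Assistant.class)
        .chatLanguageModel(model)
        .tools(new Calculator()) // all @Tool-annotated methods of Calculator become available to the LLM
        .build();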
-
tools
Configures the tools that the LLM can use.
- Parameters:
objectsWithTools - A list of objects whose methods are annotated with Tool. All these tools (methods annotated with Tool) are accessible to the LLM. Note that inherited methods are ignored.
- Returns:
- builder
- See Also:
-
toolProvider
Configures the tool provider that the LLM can use.
- Parameters:
toolProvider - Decides which tools the LLM can use to handle the request.
- Returns:
- builder
-
tools
Configures the tools that the LLM can use.
- Parameters:
tools - A map of ToolSpecification to ToolExecutor entries. This method of configuring tools is useful when tools must be configured programmatically. Otherwise, it is recommended to use the Tool-annotated Java methods and configure tools with the tools(Object...) and tools(List) methods.
- Returns:
- builder
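A sketch of programmatic tool configuration. It assumes ToolSpecification.builder() accepts a name and description, and that a ToolExecutor can be written as a lambda taking (ToolExecutionRequest, memoryId) and returning the tool result as a String; the currentTime tool itself is made up:

ToolSpecification currentTime = ToolSpecification.builder()
        .name("currentTime")
        .description("Returns the current time")
        .build();

ToolExecutor currentTimeExecutor =
        (toolExecutionRequest, memoryId) -> java.time.LocalTime.now().toString();

Assistant assistant = AiServices.builder(Assistant.class)
        .chatLanguageModel(model)
        .tools(Map.of(currentTime, currentTimeExecutor))
        .build();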
-
retriever
@Deprecated
public AiServices<T> retriever(dev.langchain4j.retriever.Retriever<dev.langchain4j.data.segment.TextSegment> retriever)
Deprecated. Use contentRetriever(ContentRetriever) (e.g. EmbeddingStoreContentRetriever) instead.
Configures a retriever that will be invoked on every method call to fetch relevant information related to the current user message from an underlying source (e.g., embedding store). This relevant information is automatically injected into the message sent to the LLM.
- Parameters:
retriever - The retriever to be used by the AI Service.
- Returns:
- builder
-
contentRetriever
public AiServices<T> contentRetriever(dev.langchain4j.rag.content.retriever.ContentRetriever contentRetriever)
Configures a content retriever to be invoked on every method call for retrieving relevant content related to the user's message from an underlying data source (e.g., an embedding store in the case of an EmbeddingStoreContentRetriever). The retrieved relevant content is then automatically incorporated into the message sent to the LLM.
This method provides a straightforward approach for those who do not require a customized RetrievalAugmentor. It configures a DefaultRetrievalAugmentor with the provided ContentRetriever.
- Parameters:
contentRetriever - The content retriever to be used by the AI Service.
- Returns:
- builder
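A sketch of RAG via a content retriever; embeddingStore and embeddingModel are assumed to be an already-populated EmbeddingStore<TextSegment> and an already-configured EmbeddingModel, and the maxResults/minScore values are arbitrary:

ContentRetriever contentRetriever = EmbeddingStoreContentRetriever.builder()
        .embeddingStore(embeddingStore)
        .embeddingModel(embeddingModel)
        .maxResults(3) // up to 3 relevant segments per user message
        .minScore(0.6) // ignore weakly related segments
        .build();

Assistant assistant = AiServices.builder(Assistant.class)
        .chatLanguageModel(model)
        .contentRetriever(contentRetriever)
        .build();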
-
retrievalAugmentor
Configures a retrieval augmentor to be invoked on every method call.
- Parameters:
retrievalAugmentor - The retrieval augmentor to be used by the AI Service.
- Returns:
- builder
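A sketch using DefaultRetrievalAugmentor (mentioned in the contentRetriever description above); contentRetriever, Assistant and model are assumed from the earlier examples:

RetrievalAugmentor retrievalAugmentor = DefaultRetrievalAugmentor.builder()
        .contentRetriever(contentRetriever) // the retriever from the previous example
        // additional RAG components (query transformers, aggregators, etc.) could be configured here
        .build();

Assistant assistant = AiServices.builder(Assistant.class)
        .chatLanguageModel(model)
        .retrievalAugmentor(retrievalAugmentor)
        .build();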
-
build
Constructs and returns the AI Service.
- Returns:
- An instance of the AI Service implementing the specified interface.
-
performBasicValidation
protected void performBasicValidation() -
removeToolMessages
-
verifyModerationIfNeeded
public static void verifyModerationIfNeeded(Future<dev.langchain4j.model.moderation.Moderation> moderationFuture)
-