langchain4j-spring-boot-integration

giuseppe-trisciuoglio/developer-kit · updated Apr 8, 2026

$ npx skills add https://github.com/giuseppe-trisciuoglio/developer-kit --skill langchain4j-spring-boot-integration
summary

Spring Boot auto-configuration and declarative AI services for LangChain4j integration.

  • Provides property-based configuration for multiple AI providers (OpenAI, Azure, Ollama) with Spring Boot starters and automatic bean wiring
  • Enables interface-based AI service definitions using @AiService annotations combined with message templates and Spring dependency injection
  • Supports RAG systems through configurable embedding stores (pgvector, Neo4j, Pinecone) and document ingestion pipelines
skill.md

LangChain4j Spring Boot Integration

Integrate LangChain4j with Spring Boot using declarative AI Services, auto-configuration, and Spring Boot starters. Configure AI model beans, set up chat memory, implement RAG pipelines with Spring Data, and build production-ready AI applications.

When to Use

Use this skill when:

  • Integrating LangChain4j into existing Spring Boot applications
  • Building AI-powered microservices with Spring Boot
  • Configuring AI model beans with @Bean annotations
  • Setting up auto-configuration for AI models and services
  • Creating declarative AI Services with Spring dependency injection
  • Implementing RAG systems with Spring Data integrations
  • Setting up chat memory with Spring context management
  • Configuring multiple AI providers (OpenAI, Azure, Ollama, Anthropic)
  • Building production-ready AI applications with Spring Boot

Overview

LangChain4j Spring Boot integration provides declarative AI Services through Spring Boot starters, enabling automatic configuration of AI components based on properties. Combine Spring dependency injection with LangChain4j's AI capabilities using interface-based definitions with annotations.

Instructions

1. Add Dependencies

<!-- Core LangChain4j Spring Boot Starter -->
<dependency>
    <groupId>dev.langchain4j</groupId>
    <artifactId>langchain4j-spring-boot-starter</artifactId>
    <version>1.8.0</version>
</dependency>

<!-- OpenAI Spring Boot Starter -->
<dependency>
    <groupId>dev.langchain4j</groupId>
    <artifactId>langchain4j-open-ai-spring-boot-starter</artifactId>
    <version>1.8.0</version>
</dependency>

2. Configure Application Properties

# application.properties
langchain4j.open-ai.chat-model.api-key=${OPENAI_API_KEY}
langchain4j.open-ai.chat-model.model-name=gpt-4o-mini
langchain4j.open-ai.chat-model.temperature=0.7
langchain4j.open-ai.chat-model.timeout=PT60S
langchain4j.open-ai.chat-model.max-tokens=1000

Or using YAML:

langchain4j:
  open-ai:
    chat-model:
      api-key: ${OPENAI_API_KEY}
      model-name: gpt-4o-mini
      temperature: 0.7
      timeout: 60s
      max-tokens: 1000
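The same property-namespace pattern applies to the other provider starters. For example, a sketch for a local Ollama model, assuming langchain4j-ollama-spring-boot-starter is on the classpath (model name shown is illustrative):

```properties
# application.properties — local Ollama setup
langchain4j.ollama.chat-model.base-url=http://localhost:11434
langchain4j.ollama.chat-model.model-name=llama3.1
langchain4j.ollama.chat-model.temperature=0.7
```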

3. Create Declarative AI Service

import dev.langchain4j.service.SystemMessage;
import dev.langchain4j.service.UserMessage;
import dev.langchain4j.service.V;
import dev.langchain4j.service.spring.AiService;

@AiService
public interface CustomerSupportAssistant {

    @SystemMessage("You are a helpful customer support agent for TechCorp.")
    String handleInquiry(String customerMessage);

    @UserMessage("Translate to {{language}}: {{text}}")
    String translate(@V("text") String text, @V("language") String language);
}

4. Enable Component Scanning

The starter automatically registers @AiService interfaces found in the packages Spring scans. An explicit @ComponentScan is only needed when the interfaces live outside the application's base package:

@SpringBootApplication
@ComponentScan(basePackages = "com.yourcompany")
public class Application {
    public static void main(String[] args) {
        SpringApplication.run(Application.class, args);
    }
}

5. Inject and Use the AI Service

@Service
public class CustomerService {

    private final CustomerSupportAssistant assistant;

    public CustomerService(CustomerSupportAssistant assistant) {
        this.assistant = assistant;
    }

    public String processCustomerQuery(String query) {
        return assistant.handleInquiry(query);
    }
}
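To expose the service over HTTP, a minimal controller sketch can sit on top of CustomerService (the /support path and plain-text request shape are illustrative choices, not part of the skill):

```java
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RequestBody;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;

@RestController
@RequestMapping("/support")
public class SupportController {

    private final CustomerService customerService;

    public SupportController(CustomerService customerService) {
        this.customerService = customerService;
    }

    // POST /support with a plain-text body returns the assistant's reply
    @PostMapping
    public String ask(@RequestBody String query) {
        return customerService.processCustomerQuery(query);
    }
}
```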

6. Verify the Integration

After setup, verify the configuration:

  1. Start the application and check the logs for LangChain4j auto-configuration activity (enable debug logging or the condition evaluation report to see which starters were applied)
  2. Confirm AI service beans are registered: look for CustomerSupportAssistant in Spring context
  3. Test the service: invoke assistant.handleInquiry("test") and verify a response is returned
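Step 3 can be automated with a Spring Boot test. A hedged sketch follows; because it calls the real provider, it assumes a valid API key in the test environment and is an integration test rather than a unit test:

```java
import org.junit.jupiter.api.Test;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.test.context.SpringBootTest;

import static org.assertj.core.api.Assertions.assertThat;

@SpringBootTest
class CustomerSupportAssistantIT {

    @Autowired
    CustomerSupportAssistant assistant;

    @Test
    void beanIsWiredAndResponds() {
        // Responses are non-deterministic, so assert only that something came back
        String reply = assistant.handleInquiry("test");
        assertThat(reply).isNotBlank();
    }
}
```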

Configuration

Property-Based Configuration: Configure AI models through application.properties for different providers.

Manual Bean Configuration: For advanced configurations, define beans manually:

@Configuration
public class AiConfig {

    @Bean
    public ChatModel chatModel(@Value("${OPENAI_API_KEY}") String apiKey) {
        return OpenAiChatModel.builder()
            .apiKey(apiKey)
            .modelName("gpt-4o-mini")
            .temperature(0.7)
            .build();
    }
}

Multiple Providers: Use explicit wiring when configuring multiple AI providers:

Each service names the chat model bean it should use via the chatModel attribute (bean names shown are illustrative):

@AiService(wiringMode = AiServiceWiringMode.EXPLICIT, chatModel = "openAiChatModel")
public interface OpenAiAssistant {
    String chat(String message);
}

@AiService(wiringMode = AiServiceWiringMode.EXPLICIT, chatModel = "azureChatModel")
public interface AzureAssistant {
    String chat(String message);
}

Declarative AI Services

Basic AI Service: Create interfaces with @AiService annotation and define methods with message templates.

Streaming AI Service: Implement streaming responses using Project Reactor (requires the langchain4j-reactor module on the classpath and a configured streaming chat model):

@AiService
public interface StreamingAssistant {
    @SystemMessage("You are a helpful assistant.")
    Flux<String> chatStream(String message);
}
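A streaming service can be surfaced to clients as server-sent events. A minimal controller sketch (the endpoint path is an illustrative choice):

```java
import org.springframework.http.MediaType;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RequestParam;
import org.springframework.web.bind.annotation.RestController;
import reactor.core.publisher.Flux;

@RestController
public class StreamController {

    private final StreamingAssistant assistant;

    public StreamController(StreamingAssistant assistant) {
        this.assistant = assistant;
    }

    // Tokens are flushed to the client as they arrive from the model
    @GetMapping(value = "/chat/stream", produces = MediaType.TEXT_EVENT_STREAM_VALUE)
    public Flux<String> stream(@RequestParam String message) {
        return assistant.chatStream(message);
    }
}
```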

Chat Memory: Set up conversation memory with Spring context:

@AiService
public interface ConversationalAssistant {
    @SystemMessage("You are a helpful assistant with memory.")
    String chat(@MemoryId String userId, String message);
}
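@MemoryId only selects which memory to use; a ChatMemoryProvider bean must exist to supply one per id. A minimal sketch using LangChain4j's in-memory message window (the 10-message cap is an arbitrary choice):

```java
import dev.langchain4j.memory.chat.ChatMemoryProvider;
import dev.langchain4j.memory.chat.MessageWindowChatMemory;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class MemoryConfig {

    @Bean
    public ChatMemoryProvider chatMemoryProvider() {
        // One bounded in-memory window per @MemoryId value;
        // swap in a persistent ChatMemoryStore for multi-instance deployments
        return memoryId -> MessageWindowChatMemory.withMaxMessages(10);
    }
}
```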

RAG Implementation

Embedding Stores: Configure embedding stores for RAG pipelines with Spring Data:

@Configuration
public class RagConfig {

    @Bean
    public EmbeddingStore<TextSegment> embeddingStore() {
        return PgVectorEmbeddingStore.builder()
            .host("localhost")
            .port(5432)
            .database("vectordb")
            .user("postgres")      // placeholder credentials
            .password("postgres")
            .table("embeddings")
            .dimension(1536)       // must match the embedding model's output dimension
            .build();
    }

    @Bean
    public EmbeddingModel embeddingModel(@Value("${OPENAI_API_KEY}") String apiKey) {
        return OpenAiEmbeddingModel.builder()
            .apiKey(apiKey)
            .modelName("text-embedding-3-small")
            .build();
    }
}

@AiService
public interface RagAssistant {

    @UserMessage("Question: {{it}}")
    String answer(String question);
}

Document Ingestion: Use EmbeddingStoreIngestor with a DocumentSplitter to chunk, embed, and store documents.

Content Retrieval: Configure an EmbeddingStoreContentRetriever to augment prompts with retrieved knowledge.
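The ingestion side can be sketched with LangChain4j's EmbeddingStoreIngestor, reusing the beans defined above (the splitter sizes are arbitrary starting points):

```java
import dev.langchain4j.data.document.Document;
import dev.langchain4j.data.document.splitter.DocumentSplitters;
import dev.langchain4j.data.segment.TextSegment;
import dev.langchain4j.model.embedding.EmbeddingModel;
import dev.langchain4j.store.embedding.EmbeddingStore;
import dev.langchain4j.store.embedding.EmbeddingStoreIngestor;
import org.springframework.stereotype.Service;

@Service
public class IngestionService {

    private final EmbeddingStoreIngestor ingestor;

    public IngestionService(EmbeddingModel embeddingModel,
                            EmbeddingStore<TextSegment> embeddingStore) {
        // Split into ~300-token segments with 30-token overlap, embed, and store
        this.ingestor = EmbeddingStoreIngestor.builder()
            .documentSplitter(DocumentSplitters.recursive(300, 30))
            .embeddingModel(embeddingModel)
            .embeddingStore(embeddingStore)
            .build();
    }

    public void ingest(Document document) {
        ingestor.ingest(document);
    }
}
```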

Tool Integration

Spring Component Tools: Define tools as Spring components:

@Component
public class Calculator {
    @Tool("Calculate the sum of two numbers")
    public double add(double a, double b) {
        return a + b;
    }
}

@AiService
public interface MathAssistant {
    String solve(String problem);
}

Examples

Basic AI Service

@AiService
public interface ChatAssistant {
    @SystemMessage("You are a helpful assistant.")
    String chat(String message);
}

AI Service with Memory

@AiService
public interface ConversationalAssistant {
    @SystemMessage("You are a helpful assistant with memory of conversations.")
    String chat(@MemoryId String userId, String message);
}

AI Service with Tools

@Component
public class WeatherService {
    @Tool("Get weather for a city")
    public String getWeather(String city) {
        return "Sunny, 22°C in " + city;
    }
}

@AiService
public interface WeatherAssistant {
    String getWeatherForCity(String city);
}

For more examples (including RAG configurations, streaming assistants, and multi-provider setups), refer to references/examples.md.

Best Practices

  • Use Property-Based Configuration: External configuration over hardcoded values
  • Use Profiles: Separate configurations for development, testing, and production
  • Add Proper Logging: Debug AI service calls and monitor performance
  • Implement Retry Mechanisms: Handle transient failures with backoff strategies
  • Monitor Token Usage: Track token consumption and implement limits
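The retry recommendation can be sketched with Spring Retry, assuming spring-retry is on the classpath and @EnableRetry is declared on a configuration class:

```java
import org.springframework.retry.annotation.Backoff;
import org.springframework.retry.annotation.Retryable;
import org.springframework.stereotype.Service;

@Service
public class ResilientChatService {

    private final ChatAssistant assistant;

    public ResilientChatService(ChatAssistant assistant) {
        this.assistant = assistant;
    }

    // Retry transient provider failures: 3 attempts, exponential backoff from 1s
    @Retryable(maxAttempts = 3, backoff = @Backoff(delay = 1000, multiplier = 2))
    public String chat(String message) {
        return assistant.chat(message);
    }
}
```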

References

For detailed API references and advanced configurations, see references/examples.md.

Constraints and Warnings

  • Store API keys securely using environment variables or secret management systems
  • AI model responses are non-deterministic; tests should account for variability
  • Rate limits may apply to AI providers; implement proper retry and backoff strategies
  • Memory providers store conversation history; implement cleanup for multi-user scenarios
  • Token costs accumulate quickly; monitor usage and implement token limits
  • Streaming responses require proper error handling for partial failures
  • Check provider-specific documentation for supported features
  • Use explicit wiring mode when multiple chat models are configured
  • Validate AI-generated outputs before use in production systems

Discussion

  • No comments yet.

Ratings

4.6 · 53 reviews
  • Soo Ndlovu· Dec 28, 2024

    We added langchain4j-spring-boot-integration from the explainx registry; install was straightforward and the SKILL.md answered most questions upfront.

  • Lucas Wang· Dec 20, 2024

    Useful defaults in langchain4j-spring-boot-integration — fewer surprises than typical one-off scripts, and it plays nicely with `npx skills` flows.

  • Min Farah· Dec 16, 2024

    langchain4j-spring-boot-integration is among the better-maintained entries we tried; worth keeping pinned for repeat workflows.

  • Diya Khanna· Nov 23, 2024

    I recommend langchain4j-spring-boot-integration for anyone iterating fast on agent tooling; clear intent and a small, reviewable surface area.

  • Henry Sharma· Nov 19, 2024

    langchain4j-spring-boot-integration fits our agent workflows well — practical, well scoped, and easy to wire into existing repos.

  • Isabella Nasser· Nov 11, 2024

    langchain4j-spring-boot-integration is among the better-maintained entries we tried; worth keeping pinned for repeat workflows.

  • Sofia Abebe· Nov 7, 2024

    Useful defaults in langchain4j-spring-boot-integration — fewer surprises than typical one-off scripts, and it plays nicely with `npx skills` flows.

  • Sofia Garcia· Oct 26, 2024

    Registry listing for langchain4j-spring-boot-integration matched our evaluation — installs cleanly and behaves as described in the markdown.

  • Mateo Robinson· Oct 14, 2024

    Solid pick for teams standardizing on skills: langchain4j-spring-boot-integration is focused, and the summary matches what you get after install.

  • Ishan Park· Oct 10, 2024

    langchain4j-spring-boot-integration has been reliable in day-to-day use. Documentation quality is above average for community skills.

showing 1-10 of 53
