Local AI with LangChain4j and Quarkus: Build an Email Task Extractor with Tool Calling
Harness the power of small LLMs running on your machine to automate email parsing and task extraction. Java-first, cloud-optional, and blazing fast with Quarkus.
This tutorial walks you through building a simple but powerful Quarkus application that uses two local LLMs (running via Ollama) to simulate a workflow: generating team emails, extracting actionable tasks from them using tool calling, and logging a simulated AI reply.
We’ll use:
phi3:mini to generate realistic-looking internal emails
llama3 to process those emails, extract tasks with LangChain4j tool calling, and log results
No external API calls. Everything runs locally. And if you want to jump ahead and just see it running, pull the full example from my GitHub.
Prerequisites
Make sure you have the following installed:
Java 17+
Maven 3.8+
Podman (for running Ollama, unless installed natively)
Your preferred Java IDE
TIP: If you are using a natively installed Ollama, pull the required models before you start working your way through the tutorial:
ollama pull phi3:mini
ollama pull llama3
Create Your Quarkus Project
Run this command to scaffold the application:
mvn io.quarkus.platform:quarkus-maven-plugin:3.22.1:create \
-DprojectGroupId=org.acme \
-DprojectArtifactId=ai-email-simulator \
-Dextensions="rest-jackson,langchain4j-ollama,scheduler"
cd ai-email-simulator
Configure Ollama and the LLMs
Open src/main/resources/application.properties and configure both models. The application.properties file configures the behavior of the LangChain4j models used in the application:
# The timeout to wait for running requests to finish.
quarkus.shutdown.timeout=2
# Generator - small, fast model
quarkus.langchain4j.generator.chat-model.model-name=phi3:mini
quarkus.langchain4j.generator.chat-model.temperature=0.8
quarkus.langchain4j.generator.chat-model.max-tokens=200
quarkus.langchain4j.generator.timeout=60s
# Processor - larger model with tool-calling
quarkus.langchain4j.processor.chat-model.model-name=llama3
quarkus.langchain4j.processor.chat-model.temperature=0.2
quarkus.langchain4j.processor.chat-model.max-tokens=100
quarkus.langchain4j.processor.timeout=120s
These settings optimize the generator for speed and creativity, while the processor is tuned for precision and tool-calling tasks. Let’s start with the generation of emails.
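By default, the extension talks to Ollama at localhost:11434. If your Ollama instance (for example, one running inside Podman) listens elsewhere, the base URL can be overridden per named model. A sketch of what that could look like (property names may vary between quarkus-langchain4j versions, so check the documentation for yours):

```properties
# Hypothetical override - only needed if Ollama is not on localhost:11434
quarkus.langchain4j.ollama.generator.base-url=http://localhost:11434
quarkus.langchain4j.ollama.processor.base-url=http://localhost:11434
```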
Email Generator Service
Create src/main/java/org/acme/EmailGeneratorService.java:
package org.acme;
import dev.langchain4j.service.SystemMessage;
import dev.langchain4j.service.UserMessage;
import io.quarkiverse.langchain4j.RegisterAiService;
@RegisterAiService(modelName = "generator")
public interface EmailGeneratorService {
@SystemMessage("""
Create a short internal email. Format it with:
- a subject line
- a body that either includes an action request or shares team information.
Keep the tone casual and professional.
Use no more than 100 words.
Do not use markdown. Do not explain the prompt.
Just return the email content starting with "Subject:"
""")
String generateEmail(@UserMessage String projectname);
}
The EmailGeneratorService uses the "generator" model and creates an email to be classified and examined by the processor later on. Note how @RegisterAiService(modelName = "generator") references the first model configuration in application.properties.
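Conceptually, LangChain4j turns the annotated interface method into a chat request with two messages: the system prompt and the user input. The following plain-Java sketch models that idea only; these are not the library's actual types:

```java
import java.util.List;

public class PromptSketch {

    // Simplified stand-in for a chat message (role + content).
    record Message(String role, String content) {}

    // Conceptual model of what @SystemMessage/@UserMessage produce:
    // one "system" message from the annotation, one "user" message
    // from the method argument.
    static List<Message> buildRequest(String systemPrompt, String userInput) {
        return List.of(
            new Message("system", systemPrompt),
            new Message("user", userInput));
    }

    public static void main(String[] args) {
        var msgs = buildRequest("Create a short internal email...", "Next Quarkus Release");
        System.out.println(msgs.get(0).role());    // system
        System.out.println(msgs.get(1).content()); // Next Quarkus Release
    }
}
```

The framework sends both messages to the configured model and maps the text reply back to the method's String return type.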
Email Processor Service
Create src/main/java/org/acme/EmailProcessor.java:
package org.acme;
import dev.langchain4j.service.SystemMessage;
import dev.langchain4j.service.UserMessage;
import io.quarkiverse.langchain4j.RegisterAiService;
@RegisterAiService(modelName="processor", tools = TodoService.class)
public interface EmailProcessor {
@SystemMessage("""
You are an AI assistant processing incoming emails.
If the email contains a task, call the 'addTask'
tool with a concise description.
Respond ONLY with ACKNOWLEDGED or THANK_YOU.
""")
String processEmail(@UserMessage String email);
}
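Small local models occasionally stray from the requested format, so it can be worth guarding the processor's reply before logging or acting on it. A hypothetical helper (not part of the tutorial code, names are our invention) might normalize the reply to the two allowed values:

```java
public class ReplyGuard {

    // Hypothetical helper: maps a raw model reply onto the two values
    // the system prompt allows, falling back to ACKNOWLEDGED for
    // anything unexpected (including null).
    static String normalize(String raw) {
        if (raw == null) return "ACKNOWLEDGED";
        String upper = raw.trim().toUpperCase();
        if (upper.contains("THANK_YOU")) return "THANK_YOU";
        return "ACKNOWLEDGED";
    }

    public static void main(String[] args) {
        System.out.println(normalize("  Thank_You!  ")); // THANK_YOU
        System.out.println(normalize("something odd"));  // ACKNOWLEDGED
    }
}
```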
The EmailProcessor uses the processor model and gets a tool wired into it. That tool is the next thing we need to implement.
Create the TodoService Tool
Create src/main/java/org/acme/TodoService.java:
package org.acme;
import dev.langchain4j.agent.tool.Tool;
import jakarta.enterprise.context.ApplicationScoped;
import org.jboss.logging.Logger;
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;
@ApplicationScoped
public class TodoService {
private static final Logger log = Logger.getLogger(TodoService.class);
private final List<String> tasks = new CopyOnWriteArrayList<>();
@Tool("Adds a task to the Todo list based on the email content.")
public String addTask(String task) {
log.infof("AI TOOL: Adding task -> %s", task);
tasks.add(task);
return "Task added successfully: " + task;
}
public List<String> getTasks() {
return List.copyOf(tasks);
}
}
This will act as the AI tool the LLM can call when it identifies tasks in the emails.
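Stripped of the CDI and LangChain4j annotations, the service's core is plain Java: CopyOnWriteArrayList makes concurrent tool calls safe, and List.copyOf hands callers an immutable snapshot. A standalone sketch of the same pattern:

```java
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;

public class TaskListDemo {

    // Thread-safe backing list: concurrent addTask calls won't corrupt it.
    private final List<String> tasks = new CopyOnWriteArrayList<>();

    String addTask(String task) {
        tasks.add(task);
        return "Task added successfully: " + task;
    }

    // Immutable snapshot: callers can't mutate internal state,
    // and later additions don't change snapshots handed out earlier.
    List<String> getTasks() {
        return List.copyOf(tasks);
    }

    public static void main(String[] args) {
        TaskListDemo demo = new TaskListDemo();
        demo.addTask("Review roadmap");
        List<String> snapshot = demo.getTasks();
        demo.addTask("Schedule beta testing");
        System.out.println(snapshot.size());        // 1
        System.out.println(demo.getTasks().size()); // 2
    }
}
```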
Let’s wire everything together
Create src/main/java/org/acme/SimulationService.java:
package org.acme;
import org.jboss.logging.Logger;
import io.quarkus.scheduler.Scheduled;
import jakarta.enterprise.context.ApplicationScoped;
import jakarta.inject.Inject;
@ApplicationScoped
public class SimulationService {
private static final Logger log = Logger.getLogger(SimulationService.class);
@Inject
EmailGeneratorService generator;
@Inject
EmailProcessor processor;
@Inject
TodoService todoService;
// This method runs every 15 seconds
@Scheduled(every = "15s", delayed = "5s") // optional delay before the first run
public void runSimulation() {
log.info("--- Email Simulation Start ---");
// call the random email generator
String email = generator.generateEmail("Next Quarkus Release");
log.infof("Generated Email:\n%s", email);
// call the processor with the "incoming email"
String reply = processor.processEmail(email);
log.infof("AI Response: %s", reply);
log.infof("Todo List: %s", todoService.getTasks());
log.info("--- Email Simulation End ---");
}
}
The SimulationService runs as a @Scheduled task so we can see a bit more going on in the log file.
Run It
Let’s see what’s going on
./mvnw quarkus:dev
Check the Output
In your terminal, you’ll see logs like:
2025-05-03 17:39:35,004 INFO [org.acm.SimulationService] (vert.x-worker-thread-1) --- Email Simulation Start ---
2025-05-03 17:39:38,047 INFO [org.acm.SimulationService] (vert.x-worker-thread-1) Generated Email:
Subject: Upcoming Quarkus Release - Development Timeline
Hey Team,
We're approaching the next major release of Quarkus, and I wanted to touch base on our development timeline. Our target is to have the new features ready by mid-February, with a beta version available for testing before the final release in late March.
Please make sure to review the updated feature roadmap and prioritize your tasks accordingly. If you have any questions or concerns, don't hesitate to reach out.
Best,
[Your Name]
2025-05-03 17:39:39,489 INFO [org.acm.TodoService] (vert.x-worker-thread-1) AI TOOL: Adding task -> Review and prioritize tasks based on the updated feature roadmap for the upcoming Quarkus release
2025-05-03 17:39:40,038 INFO [org.acm.SimulationService] (vert.x-worker-thread-1) AI Response: ACKNOWLEDGED
2025-05-03 17:39:40,040 INFO [org.acm.SimulationService] (vert.x-worker-thread-1) Todo List: [Review the changelog for Quarkus 7.2 and get familiar with any breaking changes, Review and provide feedback on proposed features by this Friday, Review and prioritize tasks based on the updated feature roadmap for the upcoming Quarkus release]
2025-05-03 17:39:40,040 INFO [org.acm.SimulationService] (vert.x-worker-thread-1) --- Email Simulation End ---
Recap
You built a local-first AI application with:
Two different LLMs for generation and processing
LangChain4j tool calling
A minimal service-oriented Quarkus design
From here, you could extend it with REST endpoints, a web frontend, persistent storage, and more.
What’s Next?
Now that you’ve built your first local AI simulation with Quarkus and LangChain4j, you’ve officially entered the world of practical, developer-friendly LLMs. What’s next?
Want to get your hands dirty and build something real? Try the full Quarkus + LangChain4j Workshop. It’s packed with step-by-step guidance, real-world tasks, and no fluff.
New to Quarkus and just want to see what the fuss is about? Kick things off here! Your future self will thank you for those fast dev loops and tiny containers.
Go ahead. Compile joy. Debug less. Quarkus is waiting.