The Java Developer’s Guide to Agentic AI
What you need to know to build reasoning, tool-using LLM agents with Quarkus and LangChain4j.
If you've recently stumbled into the world of AI-powered applications, you've likely heard the term “agentic AI” thrown around. It sounds like marketing hype, or worse, like something from a sci-fi script. But agentic AI is very real, very practical, and surprisingly accessible. Especially if you’re a Java developer.
In this article, I’ll walk you through the key concepts behind agentic AI. I’ll explain what “tools” and “functions” really mean in LLM-speak, give you an overview of the Model Context Protocol (MCP), and show you how this all ties together using modern Java tools like Quarkus and LangChain4j.
Whether you're building a chatbot that answers tech support queries or crafting an AI teammate that helps developers debug code, this guide will give you the vocabulary and understanding to get started.
First, the Vocabulary
Let’s start by decoding the terms you’ll encounter. And let me give you a little warning: a lot of things in the agent and AI space are under active development, which sounds like an understatement given how fast industry and research are moving right now. You will therefore find many competing definitions of agents; treat the following as a simple starting point.
Agent: A program (often LLM-powered) that can take actions toward a goal. Agents use reasoning steps and tools to solve tasks.
LLM (Large Language Model): A model like GPT-4, Mistral, or LLaMA that can generate text, answer questions, or decide what to do next.
Tool / Function calling: The mechanism that lets the model do something: call a Java method, hit an API, or query a database.
RAG (Retrieval-Augmented Generation): Fetching data from external sources and injecting it into the model’s context for smarter answers.
MCP (Model Context Protocol): An open standard for connecting LLM applications to external tools and data sources. Think of it as a common plug between a model and the systems it needs to use.
LangChain4j: A Java-native framework for building AI applications with agents, tools, chains, and memory.
Quarkus: A supersonic Java framework ideal for building cloud-native, event-driven, and AI-infused applications.
What Makes AI “Agentic”?
You can think of agentic AI as giving a language model a toolbox and a goal. Rather than just responding passively to prompts, an agent can:
Interpret a task,
Decide which “tool” to use,
Call external functions or APIs,
Remember what happened earlier,
Adjust its plan if something goes wrong.
Imagine a travel assistant. When a user types, “Book me a hotel in Rome next weekend,” a non-agentic LLM might respond with something generic. An agentic AI, on the other hand, could:
Use a calendar tool to resolve “next weekend”,
Use a hotel search API to find availability,
Choose a hotel and call a booking service,
Return a confirmation with real data.
That’s the difference: action, not just conversation.
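To make the “tool” part concrete before we get to the full stack, here is a minimal sketch of what the calendar step could look like as a LangChain4j tool in Java. The class and method names are illustrative, not from an existing API:

import java.time.DayOfWeek;
import java.time.LocalDate;
import java.time.temporal.TemporalAdjusters;
import dev.langchain4j.agent.tool.Tool;
import jakarta.enterprise.context.ApplicationScoped;

@ApplicationScoped
public class CalendarTool {

    @Tool("Resolves relative date expressions such as 'next weekend' to a concrete date")
    public String nextWeekend() {
        // Find the upcoming Saturday relative to today.
        LocalDate saturday = LocalDate.now().with(TemporalAdjusters.next(DayOfWeek.SATURDAY));
        return saturday.toString();
    }
}

The @Tool description matters: it is what the model reads when deciding whether this method helps with the current task.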
Getting Practical: The Java Stack for Agentic AI
If you’re working in Java, the ideal stack looks like this:
Quarkus as the application runtime,
LangChain4j for defining agents and tools,
MCP for packaging and connecting external tool servers,
Ollama or a cloud-hosted model (like OpenAI or Mistral) for LLM execution.
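If you want to follow along, the LangChain4j integration is a regular Quarkus extension. For example, assuming you pick the OpenAI provider, you could add it with the Quarkus CLI:

quarkus extension add quarkus-langchain4j-openai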
Define Tools as Annotated Java Methods
In Quarkus with LangChain4j, any public method annotated with @Tool can be used by an agent. Here's a basic example:
@ApplicationScoped
public class HotelBookingTool {

    @Tool("Books a hotel in the given city on the given date")
    public String bookHotel(String city, String date) {
        // In a real application, this would call a booking backend.
        return "Hotel booked in " + city + " for " + date;
    }
}
No glue code, no brittle API layers: just plain Java methods that the agent can call.
Build an AI Service with Quarkus
The AI service is the core connection point between your application and the LLM. It abstracts away the LLM specifics and declares all interactions in a single interface. You can register your tool with the AI service:
@RegisterAiService(tools = HotelBookingTool.class)
public interface MyAiService {

    // The parameter is sent to the model as the user message.
    String chat(String question);
}
Expose the AI Service via REST
You can expose an AI agent through a Quarkus REST endpoint:
@Path("/ask")
@ApplicationScoped
public class AgentResource {
@Inject
MyAiService agent;
@POST
@Consumes(MediaType.TEXT_PLAIN)
@Produces(MediaType.TEXT_PLAIN)
public String ask(String question) {
return agent.chat(question);
}
}
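Assuming the default Quarkus port, you can then talk to the agent with a plain HTTP call:

curl -X POST -H "Content-Type: text/plain" \
  -d "Book me a hotel in Rome on 2025-06-07" \
  http://localhost:8080/ask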
This turns your agent into a deployable microservice. And yes, this isn’t anywhere near a complete example: I haven’t talked about memory or LLM configuration at all. But I wanted to show you the basics before sending you off to the official documentation!
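To at least give you a hint at the configuration part: a minimal model setup in application.properties could look like this, assuming the quarkus-langchain4j-openai extension (the model name is just an example):

quarkus.langchain4j.openai.api-key=${OPENAI_API_KEY}
quarkus.langchain4j.openai.chat-model.model-name=gpt-4o-mini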
Using MCP in Quarkus
The Model Context Protocol (MCP) is an open specification for connecting LLM applications to external tools and data sources through a standard client/server interface. It allows you to:
Package tools as standalone, reusable servers,
Discover and invoke those tools over standard transports such as stdio or HTTP,
Reuse the same tool servers across different applications, frameworks, and models.
In Quarkus, implementing an MCP server is straightforward. Start by creating a project with the right extensions:
quarkus create app org.acme:weather:1.0.0-SNAPSHOT \
--extensions="rest-client-jackson,qute,mcp-server-stdio" --no-code
Then, define a REST client that your tool will call:
@RegisterRestClient(baseUri = "https://api.weather.gov")
public interface WeatherClient {

    @GET
    @Path("/alerts/active/area/{state}")
    Alerts getAlerts(@RestPath String state);
}
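The REST client alone is not an MCP tool yet. Here is a minimal sketch of the tool itself, using the @Tool and @ToolArg annotations from the quarkus-mcp-server extension (note this is a different @Tool than the LangChain4j one used earlier; the Alerts formatting via toString() is an assumption for brevity):

import io.quarkiverse.mcp.server.Tool;
import io.quarkiverse.mcp.server.ToolArg;
import jakarta.inject.Inject;
import org.eclipse.microprofile.rest.client.inject.RestClient;

public class WeatherTools {

    @Inject
    @RestClient
    WeatherClient weatherClient;

    @Tool(description = "Gets active weather alerts for a US state")
    String getAlerts(@ToolArg(description = "Two-letter US state code, e.g. CA or NY") String state) {
        // Delegate to the REST client; the MCP server exposes this method to clients.
        return weatherClient.getAlerts(state).toString();
    }
}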
You can plug this tool into an agent. And when I say “plug into”, I mean you configure the MCP server as a client for LangChain4j via application.properties in Quarkus:
quarkus.langchain4j.mcp.weather.transport-type=stdio
quarkus.langchain4j.mcp.weather.command=jbang,--quiet,org.acme:weather:1.0.0-SNAPSHOT:runner
There are two sides to MCP. On the server side, tools are written as standalone components and exposed over the MCP protocol; check out how to build one in the official Quarkus MCP server tutorial.
On the client side, your application consumes those tools; you can learn more about integrating a server into your application in the Quarkus + MCP with LangChain4j blog post.
Creating Powerful Agents
Truly powerful agent-based systems need more than one tool. You also need the relevant prompts and potentially many more tools. A simple example using the @ToolBox annotation and a system message can look like this:
@RegisterAiService(modelName = "openai")
public interface WeatherForecastAgent {

    @SystemMessage("""
            You are a meteorologist answering weather questions.
            Keep responses brief and accurate.
            """)
    @ToolBox({CityExtractorAgent.class, WeatherForecastService.class})
    String provideForecast(String userQuery);
}
With this setup, the model has access to weather and city services and can reason about the user’s intent before calling tools.
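For completeness, the CityExtractorAgent referenced above can itself be an AI service whose method is exposed as a tool, along the lines of the Quarkus blog series (the exact prompt wording here is illustrative):

import dev.langchain4j.agent.tool.Tool;
import dev.langchain4j.service.UserMessage;
import io.quarkiverse.langchain4j.RegisterAiService;

@RegisterAiService
public interface CityExtractorAgent {

    // The {question} placeholder is filled with the method parameter.
    @UserMessage("Extract the name of the city from the following question: {question}")
    @Tool("Extracts the city name from a user question")
    String extractCity(String question);
}

This pattern, an agent used as another agent’s tool, is how chaining works in practice.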
More about defining agents is available in the LangChain4j agent documentation.
Common Beginner Questions
“Why not just call Java methods directly?”
Because you want adaptability. Agentic AI isn’t hardcoded logic. It’s a reasoning layer on top of your methods. You give it flexibility and let it decide how to use your tools to fulfill open-ended requests. This simple example didn’t go into detail about orchestration or workflows, but there is a great two-part series on the Quarkus blog by Mario Fusco explaining the difference between workflow and agentic approaches.
“Isn’t this just RPA or scripting?”
Not quite. Traditional scripting follows static workflows. Agentic AI dynamically chooses tools and adapts based on context, user input, and memory.
“How is this different from a chatbot?”
A basic chatbot follows conversation patterns. An agent executes tasks, coordinates tools, and returns results, ideally like a digital coworker that can reason about steps and verify results.
“What are some real use cases?”
Why should you even build agents? And what would typical use cases look like? It’s a little tricky to convey because we are at the very beginning of understanding what agents can reliably do for us and where and how they add value. I’ve written down some high-level thoughts on Enterprise AI Use-Cases and how to evaluate them on LinkedIn. But let’s keep it very practical here:
1. Code Review Agent:
Feed it a PR diff. It uses CodeAnalyzerTool to catch bugs and anti-patterns. It can even file a GitHub issue.
2. Sales Coach Agent:
Give it a call transcript. It evaluates sentiment, suggests next steps, and logs customer insights into your CRM.
3. DevOps Agent:
Ask, “What pods are failing in staging?” It queries your Kubernetes API and returns a status summary with potential root causes (a minimal tool sketch follows after this list).
4. WildFly Agent:
Talk to your WildFly server instance in natural language: ask it about deployments and troubleshoot applications while it suggests solutions for the problems it detects.
All of these are possible by combining Quarkus’ integration power with LangChain4j’s reasoning framework and MCP’s interoperability.
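To make the DevOps example concrete, here is a minimal sketch of a pod-inspection tool, assuming the quarkus-kubernetes-client extension and the LangChain4j @Tool annotation (the class name and filtering logic are illustrative):

import java.util.stream.Collectors;
import dev.langchain4j.agent.tool.Tool;
import io.fabric8.kubernetes.client.KubernetesClient;
import jakarta.enterprise.context.ApplicationScoped;
import jakarta.inject.Inject;

@ApplicationScoped
public class KubernetesTool {

    @Inject
    KubernetesClient client;

    @Tool("Lists pods that are not in the Running phase in the given namespace")
    public String failingPods(String namespace) {
        // Summarize non-running pods so the model can reason about them.
        return client.pods().inNamespace(namespace).list().getItems().stream()
                .filter(pod -> !"Running".equals(pod.getStatus().getPhase()))
                .map(pod -> pod.getMetadata().getName() + ": " + pod.getStatus().getPhase())
                .collect(Collectors.joining("\n"));
    }
}

An agent with this tool in its toolbox can answer the staging question above by calling failingPods("staging") and summarizing the result.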
More information about Quarkus, LLMs, agents and LangChain4j
The Quarkus team has written extensively on agent patterns and the underlying technology.
Part 1 introduces agents, tools, and LangChain4j basics.
Part 2 dives into chaining agents together (e.g., a weather bot using a geocoding bot).
Quarkus + MCP explains how to consume MCP tool servers from your LangChain4j agents.
Building your own MCP Server shows you how to register tool servers that LLMs can call over HTTP or stdio.
The agentic future is here, and if you’re a Java developer, you’re already ahead.
Agentic AI isn’t just a buzzword anymore. It’s turning into a practical architecture. With LangChain4j and Quarkus, you don’t need to switch languages, learn Python, or rely on cloud vendors just to get started.
You already know Java. You already know how to write business logic and APIs. Now you can give your applications structured reasoning abilities, tool access, and memory. And you can access all the relevant data stored in the established, reliable ERP systems of your customers.
Start small. Add an agent to your helpdesk tool. Create a data summarizer for internal teams. Build a DevOps assistant that runs commands for you.
Then let the agents scale from there.