codecompanion.nvim
https://codecompanion.olimorris.dev/
AI-powered coding, seamlessly in Neovim.
CodeCompanion is a productivity tool that streamlines how you develop with LLMs in Neovim.
AI tools for Vimmers
There are several powerful AI tools that can be seamlessly integrated with Neovim to enhance your workflow.
After trying various options, codecompanion.nvim offers the best overall developer experience, so the rest of this article will focus only on codecompanion.nvim usage.
You can expect most of the features an integrated AI tool can bring from codecompanion.nvim:
- integrated AI chat buffer
- inline assistant to code directly into the Neovim buffer
- diff changes
- slash commands to quickly add context to the chat buffer, e.g. a custom prompt, some files or buffers, etc.
- MCP support with MCP Hub plugin
It is not an auto-completion plugin, so use another plugin like copilot.lua.
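If you also want inline completions, a minimal sketch of a copilot.lua setup could look like this (the options shown are assumptions to verify against the copilot.lua README):
-- minimal sketch: inline ghost-text completions via copilot.lua
require("copilot").setup({
  suggestion = {
    enabled = true,
    auto_trigger = true, -- show suggestions as you type
  },
  panel = { enabled = false }, -- assumed option; disable the side panel
})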
However, codecompanion.nvim also brings extra features that make for a good developer experience:
- context management with variables
- fine-grained control over what the AI agent has access to with tools
  - you can define which tools the AI agent has access to per chat session or globally
  - you can create your own groups of tools to combine multiple tools together (see the sketch after this list)
- you can override the system prompt
- create your own slash commands, either simple ones (with custom prompts) or more complex ones using Lua scripts
- BYOK (Bring Your Own Key), so more choice on the provider (e.g. not just Anthropic)
- possibility to change the model between each exchange
  - could be interesting to experiment with alloy agents
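For example, here is a minimal sketch of a custom tool group; the group name is made up and the tool names are assumptions to check against the tools shipped with your codecompanion.nvim version:
require("codecompanion").setup({
  strategies = {
    chat = {
      tools = {
        groups = {
          -- hypothetical read-only group for code reviews
          ["reviewer"] = {
            description = "Read-only tools for reviewing code",
            tools = { "read_file", "grep_search" }, -- assumed built-in tool names
          },
        },
      },
    },
  },
})
In the chat buffer you can then pull in the whole group at once with @{reviewer}, the same way the built-in full_stack_dev group is referenced later in this article.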
codecompanion.nvim as an alternative to AI TUI tools
claude code, opencode, gemini-cli, aider, and openai codex are agentic TUI tools. These AI-powered interfaces can perform multiple actions or tool calls in response to a single prompt, giving them a degree of autonomy and flexibility.
There are already lots of TUI tools, so why bother using an AI tool inside Neovim?
TUI tools each have their own drawbacks:
- yanking can be painful
- some tools do not let you open an editor to write a complex prompt
- some tools do not let you customize the system prompt, e.g. to use different personas for different use cases
- the file search is subpar compared to the experience of nvim plugins like telescope.nvim or snacks.nvim
- some are quite permissive, whereas others can be too much of a hassle, always asking for permission (or making you manually type “go ahead”)
- some tools like claude code only support their own models (which are excellent, but sometimes you want to use another model for another use case)
Advantages of using an nvim plugin over a TUI tool
- Vim motions.
- You can yank text easily.
- File search functionality is available.
- You have the freedom to choose your model provider, so you are not locked into Claude models, for example.
- The system is hackable: you can create your own scripts, slash commands, system prompts, user prompts, and more.
- Settings like temperature and max_tokens are configurable.
- It also works as an nvim-integrated AI tool: chat in nvim, perform some inline changes, and so on.
Disadvantages of using an nvim plugin
- can be slower than other TUI tools, like claude code
- might require some other nvim plugins for a better devex
- no automatic compaction
- might require more configuration (like all Neovim configurations), and does not work out of the box
- no sub-agents
  - could be mitigated by using agent MCP servers, like:
    - https://github.com/jamubc/gemini-mcp-tool
    - https://github.com/steipete/claude-code-mcp (does not seem to be maintained anymore)
    - https://github.com/frap129/opencode-mcp-tool (package does not seem to be published on npm)
- no parallel sub-agents
  - it is possible right now to call an external agent using the Agent Client Protocol
  - could also be mitigated by the agent MCP servers listed above
- not possible to dynamically switch model, e.g. to optimize cost (use a smaller model for trivial tasks, and larger models for complex tasks)
Tips
Make the chat take the whole terminal (TUI-like)
By default, codecompanion.nvim opens as a vertical split.
To get a full-buffer chat that behaves like a TUI, set the chat layout to buffer and provide a small shell alias.
- Add this alias to your shell config:
alias ai='CC_LAYOUT_OVERRIDE=buffer nvim +"CodeCompanionChat Toggle"'
- Configure codecompanion to respect the environment override:
local layout = vim.env.CC_LAYOUT_OVERRIDE or "vertical"
require("codecompanion").setup({
display = {
chat = {
window = { layout = layout },
},
},
})
Start Neovim with ai and the chat opens in a full buffer.
src: https://github.com/olimorris/codecompanion.nvim/discussions/1828
Configuring the system prompt
The system prompt used by codecompanion.nvim is a bit light.
You can customize it to define your own:
require("codecompanion").setup({
opts = {
system_prompt = function(opts)
return "My new system prompt"
end,
},
})
You can take inspiration from popular system prompts.
src: https://codecompanion.olimorris.dev/configuration/system-prompt.html
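If you keep your prompts in separate files (like the agent slash command shown later in this article), a minimal sketch of loading the system prompt from a markdown file could look like this; the path is a hypothetical example:
require("codecompanion").setup({
  opts = {
    system_prompt = function(_)
      -- hypothetical location: point this to wherever you store your prompts
      local path = vim.fn.expand("~/.config/ai/prompts/system.md")
      local file = assert(io.open(path, "r"))
      local content = file:read("*a")
      file:close()
      return content
    end,
  },
})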
Web search
The native web search tool uses Tavily, which requires an API key. Tavily offers a free plan that gives you 1000 credits per month.
See: https://codecompanion.olimorris.dev/usage/chat-buffer/agents.html#web-search
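To wire in the API key, and assuming Tavily is configured like the other adapters in this article (check the exact adapter name and options against the documentation linked above), a sketch could look like this:
require("codecompanion").setup({
  adapters = {
    -- assumption: the web search adapter is extended like the LLM adapters
    tavily = function()
      return require("codecompanion.adapters").extend("tavily", {
        env = {
          api_key = "TAVILY_API_KEY", -- name of the environment variable holding the key
        },
      })
    end,
  },
})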
Increase max tokens for agentic workflows
Agentic prompts often need large context windows. The default Copilot adapter sets max_tokens to 16384, which can be too small. To raise it, extend the adapter with customized properties, like increasing max_tokens:
require("codecompanion").setup({
adapters = {
copilot = function()
return require("codecompanion.adapters").extend("copilot", {
schema = {
model = { default = "claude-sonnet-4" },
temperature = { default = 0 },
max_tokens = { default = 200000 },
},
})
end,
},
})
src: https://codecompanion.olimorris.dev/configuration/adapters.html#configuring-adapter-settings
YOLO mode
codecompanion.nvim also has a YOLO mode where it won’t ask for your permission before performing some actions.
Enabling YOLO mode is as simple as setting a global variable:
vim.g.codecompanion_yolo_mode = true
Another way is to define a keymap to toggle it. You can use the same keymap as claude code:
require("codecompanion").setup({
strategies = {
chat = {
keymaps = {
auto_tool_mode = {
modes = { n = "<S-Tab>" },
callback = "keymaps.yolo_mode",
description = "Toggle YOLO mode",
},
},
},
},
})
Custom slash commands
Slash Commands enable you to quickly add context to the chat buffer.
They are made up of values present in the strategies.chat.slash_commands table,
alongside entries in the prompt_library table where individual prompts have
opts.is_slash_cmd = true.
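The prompt_library route is the simpler of the two; here is a minimal sketch of a prompt exposed as a slash command (the name, short_name, and prompt text are made-up examples):
require("codecompanion").setup({
  prompt_library = {
    -- hypothetical prompt exposed as /explain in the chat buffer
    ["Explain"] = {
      strategy = "chat",
      description = "Explain the selected code",
      opts = {
        is_slash_cmd = true,
        short_name = "explain",
        auto_submit = false, -- review the prompt before sending it
      },
      prompts = {
        {
          role = "user",
          content = "Explain what the following code does and why:",
        },
      },
    },
  },
})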
Here is an example of adding a more complex agent slash command via the strategies.chat.slash_commands table, using a Lua callback:
require("codecompanion").setup({
strategies = {
chat = {
slash_commands = {
["agent"] = {
description = "Agent mode",
---@param chat CodeCompanion.Chat
callback = function(chat)
-- read system prompt from file
local path = os.getenv("XDG_CONFIG_HOME") .. "/ai/prompts/agent.md"
local file = assert(io.open(path, "r"))
local content = file:read("*a")
-- add system prompt to chat context
chat:add_context({ role = "system", content = content }, "system-prompt", "<role>agent</role>")
-- add tools
chat:replace_vars_and_tools({ content = "@{full_stack_dev}" })
-- prefill user prompt where you can dynamically customize the content
local user_prompt = string.format([[
Current date is %s.
I want you to ...
]], os.date("%Y-%m-%d"))
chat:add_buf_message({ content = user_prompt, role = "user" })
end,
opts = {
contains_code = false,
},
},
},
},
},
})
More information: https://codecompanion.olimorris.dev/configuration/chat-buffer.html#slash-commands
MCP servers
To support MCP servers, you need to add this extension: https://github.com/ravitemer/mcphub.nvim
require("codecompanion").setup({
extensions = {
mcphub = {
callback = "mcphub.extensions.codecompanion",
},
},
})
Custom tools
You can create your own custom tool (if MCP is not to your liking, or you just want to use some Lua scripts). codecompanion.nvim allows you to add them quite easily.
For example, you can create a custom tool to manage a todo list for the LLM, handled in-memory with some Lua code.
~/.config/nvim/lua/tools/plan.lua
local TASKS = {}
local STATES = {
pending = " ",
done = "x",
skipped = "-",
}
return {
name = "plan",
opts = {
-- Ensure the handler function is only called once.
-- src: https://codecompanion.olimorris.dev/extending/tools.html#use-handlers-once
use_handlers_once = true
},
system_prompt = [[## Plan tool (`plan`)
You have access to an internal todo list where you can keep track of your tasks to achieve a goal. You can add tasks to the todo list, remove them, and mark them as done or skipped. You can also clear the list to remove all items, use this whenever you start working on a new goal.
# Instructions
## MANDATORY TODO WORKFLOW
1. At the START of every new goal: CLEAR the todo list
- Remove all previous tasks when beginning work on a different objective
- Confirm in your thinking: "🗂️ Todo list cleared for new goal"
2. Before taking ACTION: CREATE a comprehensive todo list
- Break down the goal into specific, actionable tasks
- Present the complete plan to the user before execution
3. During EXECUTION: Update task status accurately
- Mark tasks as DONE only when actually completed
- Mark tasks as SKIPPED when bypassed intentionally
- Remove tasks that become irrelevant
## TASK MANAGEMENT RULES
- Tasks must be specific and actionable
- Only mark tasks as DONE when genuinely completed
- When user says "next" and current task isn't done, CONTINUE current task first
- Todo list updates are automatically displayed to user (don't repeat or mention changes)
- Always prepare todo list before using other tools
## WORKFLOW PRIORITIES
1. PLANNING: Create todo list before execution
2. EXECUTION: Follow task order and complete current task
3. TRACKING: Maintain accurate task status
4. ADAPTATION: Update list when requirements change
## TODO QUALITY RULES
- Tasks = Specific, measurable actions
- Status = Accurate reflection of completion state
- Order = Logical sequence for goal achievement
- Updates = Real-time reflection of progress
<output_format>
- Always create todo list before taking action
- Maintain accurate task completion status
- Focus on current task until genuinely complete
- Present comprehensive plans before execution
- Update todo list to reflect actual progress
- Don't repeat the todo list or mention any changes that you've made
</output_format>]],
cmds = {
function(self, args)
local action = args.action
TASKS[self.chat.id] = TASKS[self.chat.id] or {}
local tasks = TASKS[self.chat.id]
if action == "add" then
if not args.text then
return { status = "error", data = "Argument `text` is required" }
end
table.insert(tasks, { text = args.text, state = "pending" })
elseif action == "remove" then
if not args.index then
return { status = "error", data = "Argument `index` is required" }
end
table.remove(tasks, args.index)
elseif action == "update" then
if not args.index then
return { status = "error", data = "Argument `index` is required" }
end
if not STATES[args.state] then
return { status = "error", data = "Invalid state `" .. args.state .. "`" }
end
tasks[args.index].state = args.state
elseif action == "clear" then
TASKS[self.chat.id] = nil
else
return { status = "error", data = "Invalid action `" .. action .. "`" }
end
return { status = "success" }
end,
},
output = {
success = function(self, agent)
-- `for_llm` is blank because LLMs always want a tool response
-- `for_user` is blank because we don't want to add empty lines
-- to the output, passing an explicit empty string skips that
agent.chat:add_tool_output(self, "", "")
end,
error = function(self, agent, args, stderr, _)
agent.chat:add_tool_output(
self,
string.format(
"**Plan Tool**: There was an error running the `%s` action:\n%s",
args.action,
vim
.iter(stderr)
:flatten()
:map(function(error)
return "- " .. error
end)
:join("\n")
)
)
end,
},
handlers = {
-- only render the todo list once after all tool calls
on_exit = function(self, agent)
local tasks = TASKS[agent.chat.id]
if not tasks or #tasks == 0 then
return agent.chat:add_tool_output(self, "", "")
end
tasks = vim
.iter(ipairs(tasks))
:map(function(index, task)
return string.format("%2d. [%s] %s", index, STATES[task.state], task.text)
end)
:join("\n")
agent.chat:add_tool_output(self, "🗂️ Tasks\n" .. tasks)
end,
},
schema = {
type = "function",
["function"] = {
name = "plan",
description = "Manage an internal todo list",
strict = true,
parameters = {
type = "object",
required = { "action" },
additionalProperties = false,
properties = {
action = {
type = "string",
enum = { "add", "remove", "update", "clear" },
description = "The action to perform",
},
text = {
type = "string",
description = "The text when adding a new task",
},
index = {
type = "integer",
description = "The 1-based index of the task when removing or updating existing tasks",
},
state = {
type = "string",
enum = vim.tbl_keys(STATES),
description = "The state when updating existing tasks",
},
},
},
},
},
}
Then add it to the codecompanion.nvim options:
require("codecompanion").setup({
strategies = {
chat = {
tools = {
plan = {
callback = require("tools.plan"),
description = "Manage an internal todo list",
},
},
},
},
})
More information: https://codecompanion.olimorris.dev/configuration/chat-buffer.html#tools
Useful codecompanion.nvim extensions
Chat history
codecompanion.nvim does not keep the chat history. To preserve it, you will have to add this extension: https://github.com/ravitemer/codecompanion-history.nvim
require("codecompanion").setup({
extensions = {
history = {
enabled = true,
opts = {
picker = "snacks",
},
},
},
})
Spinner
By default, there is no UI feedback when the agent is working. This is by design: each user has their own UI plugins (noice.nvim, fidget.nvim, lualine.nvim, …), so the plugin author decided to let users choose how to render this feedback.
One option that does not require much configuration is to use this extension: https://github.com/franco-ruggeri/codecompanion-spinner.nvim
require("codecompanion").setup({
extensions = {
spinner = {},
},
})
Support for images in the context
https://github.com/HakonHarnes/img-clip.nvim
You can use this plugin to copy images from your system clipboard into the chat buffer via :PasteImage.
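To make the pasted image land in the chat buffer as a markdown link, the usual recipe is a filetype override along these lines (treat the exact options as assumptions to verify against the img-clip.nvim docs):
require("img-clip").setup({
  filetypes = {
    codecompanion = {
      prompt_for_file_name = false, -- skip the file name prompt
      template = "[Image]($FILE_PATH)", -- insert a markdown link to the image
      use_absolute_path = true,
    },
  },
})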
render-markdown.nvim
https://github.com/MeanderingProgrammer/render-markdown.nvim
This plugin improves viewing Markdown files in Neovim.
You can also customize the icons displayed in the chat buffer and its context section with your own:
require('render-markdown').setup({
overrides = {
filetype = {
codecompanion = {
heading = {
icons = { " ", " ", " ", " ", " ", " " },
custom = {
codecompanion_input = {
pattern = "^## " .. os.getenv("USER") .. "$",
icon = " ",
},
},
},
html = {
tag = {
action = { icon = " ", highlight = "Comment" },
buf = { icon = " ", highlight = "Comment" },
file = { icon = " ", highlight = "Comment" },
group = { icon = " ", highlight = "Comment" },
image = { icon = " ", highlight = "Comment" },
role = { icon = " ", highlight = "Comment" },
summary = { icon = " ", highlight = "Comment" },
tool = { icon = " ", highlight = "Comment" },
url = { icon = " ", highlight = "Comment" },
},
},
},
},
}
})