
Overview

The AI Assistant is a beta feature that brings the power of Large Language Models (LLMs) directly into MQTT Explorer. Ask questions about your topics, get help understanding complex payloads, and receive intelligent suggestions for automations and actions—all based on your actual MQTT data.
The AI Assistant is currently in beta and requires server-side configuration. Contact your administrator if the feature is not available.

Key Features

Contextual Understanding

Automatically includes topic metadata, message history, and neighboring topics in every query

Suggested Questions

Get smart question suggestions based on the selected topic and its data

Action Proposals

Receive actionable MQTT message proposals you can send with one click

Conversation History

Maintains context across multiple questions for coherent conversations

Accessing the AI Assistant

The AI Assistant appears in the Details tab when you select a topic:
  1. Connect to Broker: Establish a connection to your MQTT broker
  2. Select a Topic: Click any topic in the tree view
  3. Open Details Tab: Switch to the Details tab in the sidebar
  4. Expand AI Assistant: Click the “AI Assistant” panel header to expand it
The AI Assistant panel is collapsible. Keep it expanded while exploring topics to see suggested questions for each new topic you select.

Configuration

Server-Side Setup

The AI Assistant uses a backend proxy architecture for security. API keys are configured on the server and never exposed to clients.

Environment Variables
# Choose provider (openai or gemini)
export LLM_PROVIDER=openai

# Set API key (provider-specific or generic)
export OPENAI_API_KEY=sk-...
# or
export GEMINI_API_KEY=AIza...
# or
export LLM_API_KEY=...  # Generic fallback

# Optional: Adjust context size (default: 500 tokens)
export LLM_NEIGHBORING_TOPICS_TOKEN_LIMIT=500

# Start server
node dist/src/server.js
Getting an OpenAI API Key
  1. Visit platform.openai.com/api-keys
  2. Sign up or log in to your OpenAI account
  3. Create a new API key
  4. Copy the key and set it in your server environment
Note: Using the AI Assistant consumes OpenAI API credits. Review pricing at openai.com/pricing.
Getting a Google Gemini API Key
  1. Visit aistudio.google.com/app/apikey
  2. Sign in with your Google account
  3. Create a new API key
  4. Copy the key and set it in your server environment
Note: Google Gemini offers a generous free tier. Review pricing at ai.google.dev/pricing.
If no LLM environment variables are configured, the AI Assistant will be completely hidden from the interface.
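The fallback order described above (provider-specific key first, then the generic `LLM_API_KEY`, otherwise the feature is hidden) can be sketched as follows. This is a minimal illustration; `resolveLlmConfig` and its return shape are hypothetical names, not the server's actual code.

```typescript
type LlmProvider = "openai" | "gemini";

interface LlmConfig {
  provider: LlmProvider;
  apiKey: string;
}

// Illustrative sketch of the documented key-resolution order.
function resolveLlmConfig(
  env: Record<string, string | undefined>
): LlmConfig | null {
  // LLM_PROVIDER selects the provider; openai is assumed as the default here.
  const provider: LlmProvider =
    env.LLM_PROVIDER === "gemini" ? "gemini" : "openai";
  // The provider-specific key wins; LLM_API_KEY is the generic fallback.
  const specific =
    provider === "openai" ? env.OPENAI_API_KEY : env.GEMINI_API_KEY;
  const apiKey = specific ?? env.LLM_API_KEY;
  // With no key configured at all, the assistant stays hidden in the UI.
  return apiKey ? { provider, apiKey } : null;
}
```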

Supported Providers

OpenAI
  • Model: gpt-5-mini (400K context window)
  • Official SDK with automatic retries and timeout handling
  • Fast responses optimized for interactive use
Google Gemini
  • Model: gemini-1.5-flash-latest
  • Generous free tier for testing and small deployments
  • Direct API integration

Using the AI Assistant

Interactive Chat

Asking Questions
  1. Type Your Question: Enter your question in the text field at the bottom of the AI Assistant panel
  2. Send: Click the send icon or press Enter
  3. View Response: The AI response appears as a chat message with a timestamp
  4. Continue Conversation: Ask follow-up questions to dive deeper
Keyboard Shortcuts
  • Enter: Send message
  • Shift + Enter: New line in message (multi-line input)

Suggested Questions

When you select a topic, the AI Assistant automatically generates contextual question suggestions:

Default Suggestions (always available)
  • “Explain this data structure”
  • “What does this value mean?”
  • “Summarize all subtopics”
  • “What can I do with this topic?”
Smart Suggestions (generated based on topic content)
  • Auto-generated when you expand the AI Assistant
  • Context-aware based on topic type, message content, and neighbors
  • Updated when switching to a different topic
  • Up to 6 suggestions displayed
Click any suggested question chip to instantly send it—no typing required!
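The merging behaviour implied above (smart suggestions plus the four defaults, capped at six chips) might look like the following sketch. `mergeSuggestions` is an illustrative name; the real app's logic may differ in ordering and deduplication.

```typescript
// The four default suggestions documented above.
const DEFAULT_SUGGESTIONS: string[] = [
  "Explain this data structure",
  "What does this value mean?",
  "Summarize all subtopics",
  "What can I do with this topic?",
];

// Hypothetical merge: smart, topic-specific suggestions take priority,
// duplicates are dropped, and at most `max` chips are shown.
function mergeSuggestions(smart: string[], max = 6): string[] {
  const merged = [...smart, ...DEFAULT_SUGGESTIONS];
  return [...new Set(merged)].slice(0, max);
}
```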

Context Intelligence

Every question you ask automatically includes rich context:

Included Context
  • Current Topic: Full topic path and latest message value
  • Topic Metadata: Message count, subtopic count, retained status
  • Message History: Recent message history for the topic
  • Neighboring Topics: Related topics in the hierarchy:
    • Parent topics
    • Sibling topics
    • Child topics
    • Grandchild topics (two levels down)
    • Cousin topics (siblings of parent)
Smart Truncation
  • Large message payloads are truncated with [content truncated]
  • Topic lists are limited to the most relevant neighbors
  • Total neighboring context capped at 500 tokens (configurable)
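The truncation and token-cap behaviour above can be sketched roughly as below. The marker text comes from the documentation; the character limit and the ~4-characters-per-token estimate are assumptions for illustration, not the server's actual accounting.

```typescript
const TRUNCATION_MARKER = "[content truncated]";

// Illustrative payload truncation: keep the head and append the marker.
function truncatePayload(payload: string, maxChars: number): string {
  if (payload.length <= maxChars) return payload;
  return payload.slice(0, maxChars) + " " + TRUNCATION_MARKER;
}

// Illustrative neighbor cap: keep topics until the (rough) token budget
// is exhausted, assuming ~4 characters per token.
function capNeighbors(topics: string[], tokenLimit = 500): string[] {
  const kept: string[] = [];
  let tokens = 0;
  for (const topic of topics) {
    const cost = Math.ceil(topic.length / 4);
    if (tokens + cost > tokenLimit) break;
    kept.push(topic);
    tokens += cost;
  }
  return kept;
}
```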
Neighboring topics provide essential context for understanding device relationships and automation opportunities. For example:
  • If you select home/bedroom/light/state, the assistant sees related topics like home/bedroom/light/brightness and home/bedroom/temperature
  • This allows the AI to suggest automations like “Turn on light when temperature drops below 18°C”
  • The 500-token limit (increased from 100) enables better multi-device and room-level proposals
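The neighbor classes listed above (parent, sibling, child, grandchild, cousin) can be derived from topic paths alone, as in this hypothetical helper. The real app works on its in-memory topic tree, so treat this purely as an illustration of the relationships.

```typescript
// Classify `other` relative to `selected` using path segments only.
function classifyNeighbor(selected: string, other: string): string | null {
  const s = selected.split("/");
  const o = other.split("/");
  const parent = s.slice(0, -1).join("/");
  const grandparent = s.slice(0, -2).join("/");
  if (other === parent) return "parent";
  // Same depth, same parent, not the topic itself.
  if (
    o.length === s.length &&
    other !== selected &&
    o.slice(0, -1).join("/") === parent
  )
    return "sibling";
  // One level below the selected topic.
  if (o.length === s.length + 1 && o.slice(0, -1).join("/") === selected)
    return "child";
  // Two levels below the selected topic.
  if (o.length === s.length + 2 && o.slice(0, s.length).join("/") === selected)
    return "grandchild";
  // Sibling of the parent (shares the grandparent).
  if (
    o.length === s.length - 1 &&
    other !== parent &&
    o.slice(0, -1).join("/") === grandparent
  )
    return "cousin";
  return null;
}
```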

AI Capabilities

Understanding Data

JSON Structure Explanation
Q: Explain this data structure
A: This is a JSON object containing device state information with three properties:
   - "state": A string indicating the current power state ("ON" or "OFF")
   - "brightness": A numeric value from 0-100 representing light intensity
   - "color": A nested object with RGB values (red, green, blue) for color control
Value Interpretation
Q: What does this value mean?
A: The value "23.5" on topic "home/bedroom/temp" represents the current 
   temperature reading in degrees Celsius from a temperature sensor in the bedroom.

MQTT Concepts

Ask about MQTT protocol features:

Retained Messages
Q: Why is this message retained?
A: This message is retained because it represents the current state of the device. 
   Retained messages ensure new subscribers immediately receive the last known state 
   without waiting for the next update.
QoS Levels
Q: What QoS level should I use for this topic?
A: For this device command topic, I recommend QoS 1 (at least once delivery). 
   This ensures the command reaches the device even if there are temporary network 
   issues, while avoiding the overhead of QoS 2.

Action Proposals

The AI can generate actionable MQTT message proposals:

Proposal Cards

When the AI suggests an action, it appears as a card with:
  • Description: What the action does
  • Topic: Destination topic path
  • Payload: Message payload to send
  • Send Button: One-click publishing
Example Interaction
Q: How can I turn this light on?

[AI Response]
You can turn the light on by publishing to the command topic:

[Proposed Action Card]
Description: Turn on the bedroom light
Topic: home/bedroom/light/set
Payload: {"state":"ON"}
[Send Message Button]
Click “Send Message” on any proposal card to instantly publish the message to your broker—no need to manually copy/paste!
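A proposal card reduces to a small data structure plus a one-click publish, roughly as below. The interface name and the injected `publish` callback are illustrative; the app's actual types and publishing path may differ.

```typescript
// Illustrative shape mirroring the card fields documented above.
interface ActionProposal {
  description: string;
  topic: string;
  payload: string;
}

// "Send Message" simply publishes the proposed payload to the proposed
// topic through whatever publish function the client already uses.
function sendProposal(
  proposal: ActionProposal,
  publish: (topic: string, payload: string) => void
): void {
  publish(proposal.topic, proposal.payload);
}
```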

Follow-Up Questions

After receiving a response, the AI may suggest relevant follow-up questions:

Question Proposal Chips
  • Appear below AI responses
  • Color-coded differently from initial suggestions
  • Can be categorized (e.g., “Automation”, “Troubleshooting”)
  • Click to ask the follow-up question
Example
[After explaining a temperature sensor]

Follow-up questions:
- "What's the normal range for this sensor?"
- "How can I get alerts when temperature is abnormal?"
- "Show me historical temperature data"

Conversation Management

Conversation History

The assistant maintains context across multiple questions:

History Features
  • Last 10 messages kept in context
  • Enables coherent multi-turn conversations
  • References previous questions and answers
  • Understands pronouns like “this”, “it”, “that”
Example Conversation
You: What does this temperature sensor measure?
AI: It measures the ambient temperature in the bedroom...

You: What's the normal range?
AI: Based on typical bedroom temperatures, a normal range is 18-24°C...

You: How can I automate the heater based on this?
AI: You can create an automation that monitors this temperature sensor...
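The "last 10 messages" window amounts to a simple slice over the chat history, sketched below. The limit comes from the documentation; the `ChatMessage` shape and function name are assumptions.

```typescript
interface ChatMessage {
  role: "user" | "assistant";
  content: string;
}

// Keep only the most recent messages so multi-turn context stays bounded.
function windowHistory(history: ChatMessage[], limit = 10): ChatMessage[] {
  return history.slice(-limit);
}
```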

Clear Chat

Start a fresh conversation:
  1. Click Clear Button: Click the clear icon (trash/X) next to the input field
  2. History Cleared: All messages are removed and conversation context is reset
  3. New Conversation: Start asking questions without reference to previous context
Clearing the chat also clears the internal conversation history, so the AI won’t remember previous context.

Auto-Clear on Topic Change

When you select a different topic:
  • Chat history is automatically cleared
  • New suggested questions are generated
  • Previous conversation context is reset
  • Prevents confusion between different topics

Debug Mode

For developers and advanced users:
  1. Enable Debug: Click the bug icon in the AI Assistant header
  2. View API Traffic: See complete request/response data from the LLM API
  3. Inspect Context: Review system prompts, message history, and context data
Debug Information Includes
  • System message (prompt template)
  • All conversation messages with full content
  • API request details (provider, model, URL, body)
  • API response details (ID, timing, token usage)
  • Message summaries and statistics
Use debug mode to understand how context is provided to the AI or to troubleshoot unexpected responses.

Privacy & Security

Data Handling

What is Sent to LLM Provider
  • Topic paths from your MQTT broker
  • Message payloads (current and recent history)
  • Neighboring topic information
  • Your questions and conversation history
What is NOT Sent
  • MQTT broker credentials
  • Connection settings
  • API keys (handled server-side only)
  • Other users’ data
Be cautious when using the AI Assistant with topics containing sensitive or confidential information. Your data is sent to third-party LLM providers (OpenAI or Google).

Security Best Practices

Review Topic Data

Before asking questions, verify the topic doesn’t contain passwords, tokens, or PII

Use in Development

Test with non-production data first before using with live systems

Server-Side Keys

API keys are never exposed to the browser; they are always configured server-side

Disable for Sensitive Data

Administrator can disable AI Assistant by not setting LLM environment variables

Rate Limiting

The service includes automatic handling for rate limits:
  • Retry logic with exponential backoff
  • 30-second timeout for requests
  • Error messages displayed when rate limits are hit
  • Suggested wait time before retrying
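An exponential backoff schedule capped at the 30-second request timeout might look like the sketch below. The base delay and doubling schedule are illustrative assumptions; the actual retry policy lives in the server (and, for OpenAI, partly in the official SDK).

```typescript
// Illustrative backoff: 1s, 2s, 4s, 8s, ... capped at the 30s timeout.
function backoffDelayMs(
  attempt: number,
  baseMs = 1000,
  maxMs = 30_000
): number {
  return Math.min(baseMs * 2 ** attempt, maxMs);
}
```

In practice a retry loop would also add random jitter to avoid synchronized retries from multiple clients.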

Troubleshooting

“LLM service not configured on server”

Cause: The server does not have API keys configured
Solution: Contact your administrator to set the OPENAI_API_KEY, GEMINI_API_KEY, or LLM_API_KEY environment variable on the server

“Invalid API key” or Authentication Errors

Solutions:
  • Verify the API key is correct in server configuration
  • Check that your LLM provider account is active
  • Ensure you have available API credits/quota
  • Restart the server after changing environment variables

“Rate limit exceeded”

Solutions:
  • Wait a few minutes before trying again
  • Check your usage dashboard (OpenAI or Google Cloud)
  • Consider upgrading your API plan for higher limits

“Request timeout”

Solutions:
  • Check server internet connectivity
  • Try asking a simpler question with less context
  • Verify LLM provider service status
  • Check server logs for detailed error messages

No Response or Unexpected Answers

Solutions:
  • Enable debug mode to inspect API requests/responses
  • Clear chat and start a new conversation
  • Try rephrasing your question more specifically
  • Ensure the selected topic has recent message data

Use Cases

Learn MQTT

Ask questions about MQTT concepts, QoS, retained messages, and best practices

Debug Devices

Understand why a device is sending unexpected data or not responding

Discover Automations

Get suggestions for smart home automations based on your topics

Decode Protocols

Get help understanding proprietary or complex message formats

Explore IoT Setup

Ask about device relationships and system architecture

Generate Commands

Get help crafting the correct MQTT messages to control devices

Limitations

The AI Assistant is a beta feature with some limitations:
  • Requires active internet connection
  • Requires valid API key with available credits
  • Responses limited to 500 tokens for performance
  • May not have knowledge of proprietary or custom MQTT implementations
  • Can occasionally provide incorrect information (always verify critical actions)
  • Context window is limited (large message histories may be truncated)

Future Enhancements

Planned improvements include:
  • Support for additional LLM providers (Anthropic Claude, Azure OpenAI, local models)
  • Ability to save and share helpful conversations
  • Integration with automation platforms (Home Assistant, Node-RED)
  • Custom prompt templates for specific use cases
  • Offline mode with cached responses
  • Multi-language support