Overview
The AI Assistant is a beta feature that brings the power of Large Language Models (LLMs) directly into MQTT Explorer. Ask questions about your topics, get help understanding complex payloads, and receive intelligent suggestions for automations and actions, all based on your actual MQTT data.

The AI Assistant is currently in beta and requires server-side configuration. Contact your administrator if the feature is not available.
Key Features
Contextual Understanding
Automatically includes topic metadata, message history, and neighboring topics in every query
Suggested Questions
Get smart question suggestions based on the selected topic and its data
Action Proposals
Receive actionable MQTT message proposals you can send with one click
Conversation History
Maintains context across multiple questions for coherent conversations
Accessing the AI Assistant
The AI Assistant appears in the Details tab when you select a topic.
Configuration
Server-Side Setup
The AI Assistant uses a backend proxy architecture for security. API keys are configured on the server and never exposed to clients.
Environment Variables
Get OpenAI API Key
- Visit platform.openai.com/api-keys
- Sign up or log in to your OpenAI account
- Create a new API key
- Copy the key and set it in your server environment
Get Google Gemini API Key
- Visit aistudio.google.com/app/apikey
- Sign in with your Google account
- Create a new API key
- Copy the key and set it in your server environment
If no LLM environment variables are configured, the AI Assistant will be completely hidden from the interface.
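The gating described above can be sketched roughly as follows. This is an illustrative sketch, not the server's actual code: the function name and selection order are assumptions, while the variable names are the ones this document mentions in Troubleshooting.

```python
import os
from typing import Optional

def detect_llm_provider() -> Optional[str]:
    """Pick an LLM provider from server environment variables.

    Illustrative sketch: the checking order is an assumption,
    but the variable names match those in this document.
    """
    if os.environ.get("OPENAI_API_KEY"):
        return "openai"
    if os.environ.get("GEMINI_API_KEY"):
        return "gemini"
    if os.environ.get("LLM_API_KEY"):
        return "generic"
    # No key configured: the client hides the AI Assistant entirely.
    return None
```

Because the keys live only in the server's environment, the browser never needs to see them, which matches the backend-proxy design above.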
Supported Providers
OpenAI
- Model: gpt-5-mini (400K context window)
- Official SDK with automatic retries and timeout handling
- Fast responses optimized for interactive use
Google Gemini
- Model: gemini-1.5-flash-latest
- Generous free tier for testing and small deployments
- Direct API integration
Using the AI Assistant
Interactive Chat
Asking Questions
Keyboard Shortcuts
- Enter: Send message
- Shift + Enter: New line in message (multi-line input)
Suggested Questions
When you select a topic, the AI Assistant automatically generates contextual question suggestions.
Default Suggestions (always available)
- “Explain this data structure”
- “What does this value mean?”
- “Summarize all subtopics”
- “What can I do with this topic?”
Contextual Suggestions
- Auto-generated when you expand the AI Assistant
- Context-aware based on topic type, message content, and neighbors
- Updated when switching to a different topic
- Up to 6 suggestions displayed
Context Intelligence
Every question you ask automatically includes rich context.
Included Context
- Current Topic: Full topic path and latest message value
- Topic Metadata: Message count, subtopic count, retained status
- Message History: Recent message history for the topic
- Neighboring Topics: Related topics in the hierarchy:
- Parent topics
- Sibling topics
- Child topics
- Grandchild topics (one level down)
- Cousin topics (siblings of parent)
- Large message payloads are truncated with [content truncated]
- Topic lists are limited to the most relevant neighbors
- Total neighboring context capped at 500 tokens (configurable)
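The truncation and budgeting above can be sketched as follows. The marker text comes from this document; the function names and the ~4-characters-per-token heuristic are assumptions for illustration, not the app's actual implementation.

```python
TRUNCATION_MARKER = "[content truncated]"

def truncate_payload(payload: str, max_chars: int = 400) -> str:
    """Cut oversized payloads, appending the marker shown in the UI."""
    if len(payload) <= max_chars:
        return payload
    return payload[:max_chars] + " " + TRUNCATION_MARKER

def cap_neighbor_context(neighbors: list, token_budget: int = 500) -> list:
    """Keep neighbor lines until a rough token budget is exhausted.

    Assumes ~4 characters per token, a common rough heuristic.
    """
    kept, used = [], 0
    for line in neighbors:
        cost = max(1, len(line) // 4)
        if used + cost > token_budget:
            break
        kept.append(line)
        used += cost
    return kept
```

The budget-based cutoff is why only the most relevant neighbors survive: once the 500-token cap is reached, remaining topics are simply dropped from the prompt.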
Why Include Neighboring Topics?
Neighboring topics provide essential context for understanding device relationships and automation opportunities. For example:
- If you select home/bedroom/light/state, the assistant sees related topics like home/bedroom/light/brightness and home/bedroom/temperature
- This allows the AI to suggest automations like “Turn on light when temperature drops below 18°C”
- The 500-token limit (increased from 100) enables better multi-device and room-level proposals
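Deriving those relationships from topic paths can be sketched as below. This is an illustrative sketch (the function name is assumed) covering parents, siblings, and direct children; grandchildren and cousins work the same way and are omitted for brevity.

```python
def neighbors_of(topic: str, all_topics: set) -> dict:
    """Group known topics by their relationship to `topic`.

    Topic paths use '/' as the MQTT level separator.
    """
    parts = topic.split("/")
    parent = "/".join(parts[:-1])
    out = {"parents": [], "siblings": [], "children": []}
    for t in sorted(all_topics):
        if t == topic:
            continue
        levels = t.split("/")
        if t == parent:
            out["parents"].append(t)
        elif len(levels) == len(parts) and "/".join(levels[:-1]) == parent:
            out["siblings"].append(t)
        elif len(levels) == len(parts) + 1 and t.startswith(topic + "/"):
            out["children"].append(t)
    return out
```

With home/bedroom/light/state selected, home/bedroom/light/brightness lands in the siblings group, matching the example above.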
AI Capabilities
Understanding Data
JSON Structure Explanation
Ask the assistant to break down nested JSON payloads field by field.
MQTT Concepts
Ask about MQTT protocol features such as retained messages, QoS levels, and topic wildcards.
Action Proposals
The AI can generate actionable MQTT message proposals.
Proposal Cards
When the AI suggests an action, it appears as a card with:
- Description: What the action does
- Topic: Destination topic path
- Payload: Message payload to send
- Send Button: One-click publishing
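In code, a proposal card might look like the following hypothetical structure. The field names mirror the card fields listed above; the class and function names are illustrative, and `publish` stands in for any MQTT publish function (for example, a paho-mqtt client's `publish` method).

```python
from dataclasses import dataclass

@dataclass
class ActionProposal:
    """Fields shown on a proposal card, as listed above."""
    description: str  # what the action does
    topic: str        # destination topic path
    payload: str      # message payload to send

def send_proposal(proposal: ActionProposal, publish) -> None:
    """One-click send: hand the proposal to an MQTT publish callable.

    `publish` is any callable taking (topic, payload).
    """
    publish(proposal.topic, proposal.payload)
```

Keeping the proposal as structured data rather than free text is what makes one-click publishing safe to offer: the topic and payload are known before the button is pressed.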
Follow-Up Questions
After receiving a response, the AI may suggest relevant follow-up questions.
Question Proposal Chips
- Appear below AI responses
- Color-coded differently from initial suggestions
- Can be categorized (e.g., “Automation”, “Troubleshooting”)
- Click to ask the follow-up question
Conversation Management
Conversation History
The assistant maintains context across multiple questions.
History Features
- Last 10 messages kept in context
- Enables coherent multi-turn conversations
- References previous questions and answers
- Understands pronouns like “this”, “it”, “that”
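The 10-message window behaves like a simple bounded queue. A minimal sketch, assuming the class name and message shape (role/content dictionaries are a common LLM convention, not confirmed by this document):

```python
from collections import deque

HISTORY_LIMIT = 10  # last 10 messages kept in context, per the docs

class ConversationHistory:
    """Keep only the most recent messages for the LLM context."""

    def __init__(self, limit: int = HISTORY_LIMIT):
        # deque with maxlen silently drops the oldest entry when full
        self._messages = deque(maxlen=limit)

    def add(self, role: str, content: str) -> None:
        self._messages.append({"role": role, "content": content})

    def as_context(self) -> list:
        return list(self._messages)

    def clear(self) -> None:
        # Clear Chat and topic changes reset this window
        self._messages.clear()
```

Because older messages fall off the window, pronouns like “this” or “it” only resolve against the most recent exchanges.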
Clear Chat
Start a fresh conversation at any time. Clearing the chat also clears the internal conversation history, so the AI won’t remember previous context.
Auto-Clear on Topic Change
When you select a different topic:
- Chat history is automatically cleared
- New suggested questions are generated
- Previous conversation context is reset
- Prevents confusion between different topics
Debug Mode
For developers and advanced users:
Debug Information Includes
- System message (prompt template)
- All conversation messages with full content
- API request details (provider, model, URL, body)
- API response details (ID, timing, token usage)
- Message summaries and statistics
Privacy & Security
Data Handling
What is Sent to the LLM Provider
- Topic paths from your MQTT broker
- Message payloads (current and recent history)
- Neighboring topic information
- Your questions and conversation history
What is NOT Sent
- MQTT broker credentials
- Connection settings
- API keys (handled server-side only)
- Other users’ data
Security Best Practices
Review Topic Data
Before asking questions, verify the topic doesn’t contain passwords, tokens, or PII
Use in Development
Test with non-production data first before using with live systems
Server-Side Keys
API keys are never exposed to the browser; they are always configured server-side
Disable for Sensitive Data
Administrators can disable the AI Assistant by not setting any LLM environment variables
Rate Limiting
The service includes automatic handling for rate limits:
- Retry logic with exponential backoff
- 30-second timeout for requests
- Error messages displayed when rate limits are hit
- Suggested wait time before retrying
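The retry behavior can be sketched as follows. The pattern (exponential backoff on rate-limit errors) comes from the list above; the retry count, delay values, and `RateLimitError` class are illustrative assumptions, not the app's actual values.

```python
import time

class RateLimitError(Exception):
    """Illustrative stand-in for a provider's HTTP 429 error."""

def call_with_backoff(request, max_retries: int = 3,
                      base_delay: float = 1.0, sleep=time.sleep):
    """Retry `request` with exponential backoff on rate limits.

    Doubles the wait after each failed attempt; re-raises once
    the retry budget is exhausted so the UI can show an error.
    """
    for attempt in range(max_retries + 1):
        try:
            return request()
        except RateLimitError:
            if attempt == max_retries:
                raise
            sleep(base_delay * (2 ** attempt))  # 1s, 2s, 4s, ...
```

Injecting `sleep` as a parameter keeps the backoff testable without real delays; the 30-second request timeout mentioned above would be enforced separately by the HTTP client.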
Troubleshooting
”LLM service not configured on server”
Cause: The server does not have API keys configured.
Solution: Contact your administrator to configure the OPENAI_API_KEY, GEMINI_API_KEY, or LLM_API_KEY environment variable.
”Invalid API key” or Authentication Errors
Solutions:
- Verify the API key is correct in server configuration
- Check that your LLM provider account is active
- Ensure you have available API credits/quota
- Restart the server after changing environment variables
”Rate limit exceeded”
Solutions:
- Wait a few minutes before trying again
- Check your usage dashboard (OpenAI or Google Cloud)
- Consider upgrading your API plan for higher limits
”Request timeout”
Solutions:
- Check server internet connectivity
- Try asking a simpler question with less context
- Verify LLM provider service status
- Check server logs for detailed error messages
No Response or Unexpected Answers
Solutions:
- Enable debug mode to inspect API requests/responses
- Clear chat and start a new conversation
- Try rephrasing your question more specifically
- Ensure the selected topic has recent message data
Use Cases
Learn MQTT
Ask questions about MQTT concepts, QoS, retained messages, and best practices
Debug Devices
Understand why a device is sending unexpected data or not responding
Discover Automations
Get suggestions for smart home automations based on your topics
Decode Protocols
Get help understanding proprietary or complex message formats
Explore IoT Setup
Ask about device relationships and system architecture
Generate Commands
Get help crafting the correct MQTT messages to control devices
Limitations
- Requires active internet connection
- Requires valid API key with available credits
- Responses limited to 500 tokens for performance
- May not have knowledge of proprietary or custom MQTT implementations
- Can occasionally provide incorrect information (always verify critical actions)
- Context window is limited (large message histories may be truncated)
Future Enhancements
Planned improvements include:
- Support for additional LLM providers (Anthropic Claude, Azure OpenAI, local models)
- Ability to save and share helpful conversations
- Integration with automation platforms (Home Assistant, Node-RED)
- Custom prompt templates for specific use cases
- Offline mode with cached responses
- Multi-language support
Related Features
- Message Inspection - Understanding message payloads
- Publishing - Sending AI-generated message proposals
- Topic Visualization - Navigating topic hierarchy