MQTT Explorer includes comprehensive debugging capabilities to help developers troubleshoot issues with MQTT connections, message decoding, and AI Assistant interactions.
AI Assistant Debug View
The AI Assistant includes a built-in debug panel that shows complete request/response information.
Enabling Debug Mode
Open AI Assistant
Click the AI Assistant icon in the sidebar or press the keyboard shortcut.
Click the debug icon
Look for the bug icon (🐛) in the AI Assistant header and click it to toggle debug view.
View debug information
The debug panel displays system messages, API requests, responses, and timing information.
Debug mode persists across sessions - toggle it off when you’re done debugging to reduce UI clutter.
Debug Output Structure
The debug view displays a comprehensive JSON structure:

```json
{
  "systemMessage": {
    "role": "system",
    "content": "You are an expert AI assistant specializing in MQTT...",
    "note": "This is the system prompt that provides context to the LLM"
  },
  "messages": [
    {
      "index": 0,
      "role": "user",
      "content": "What does this topic do?",
      "fullContent": "Context:\nTopic: home/livingroom/light\n...",
      "timestamp": "2026-01-30T13:20:15.123Z",
      "proposals": 0,
      "questionProposals": 0,
      "apiDebug": { /* ... */ }
    }
  ],
  "summary": {
    "totalMessages": 2,
    "messagesWithDebugInfo": 1,
    "lastApiCall": "2026-01-30T13:20:15.123Z"
  }
}
```
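The exported structure is convenient to post-process for quick triage. A minimal TypeScript sketch; the interface below models only the fields shown above, not the app's actual types:

```typescript
// Models a subset of the debug-view JSON export shown above.
interface DebugExport {
  messages: { role: string; timestamp: string; apiDebug?: unknown }[]
  summary: { totalMessages: number; messagesWithDebugInfo: number; lastApiCall: string }
}

// One-line summary for bug reports: message count, how many carry API
// debug data, and when the last API call happened.
function summarize(dump: DebugExport): string {
  const withDebug = dump.messages.filter(m => m.apiDebug !== undefined).length
  return `${dump.summary.totalMessages} messages, ${withDebug} with API debug info, last call ${dump.summary.lastApiCall}`
}

const dump: DebugExport = {
  messages: [{ role: 'user', timestamp: '2026-01-30T13:20:15.123Z', apiDebug: {} }],
  summary: { totalMessages: 2, messagesWithDebugInfo: 1, lastApiCall: '2026-01-30T13:20:15.123Z' },
}
console.log(summarize(dump))
```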
System Message
The system message contains the AI Assistant's core instructions:

```json
{
  "role": "system",
  "content": "You are an expert AI assistant specializing in MQTT...",
  "note": "This is the system prompt that provides context to the LLM"
}
```
Purpose:
Defines the AI’s expertise and behavior
Sets communication style (concise, technical, etc.)
Specifies response format rules
Lists supported MQTT ecosystems
Debugging Use:
If the AI gives incorrect or off-topic responses, review the system message to ensure the instructions are clear.
Message Array
Each conversation turn is logged with complete metadata:
| Field | Type | Description |
| --- | --- | --- |
| `index` | number | Message position in conversation |
| `role` | string | `"user"` or `"assistant"` |
| `content` | string | Display text (may be truncated) |
| `fullContent` | string | Complete message with context |
| `timestamp` | string | ISO 8601 timestamp |
| `proposals` | number | Count of action proposals in response |
| `questionProposals` | number | Count of suggested follow-up questions |
| `apiDebug` | object | API request/response details (user messages only) |
User messages include detailed API debugging data:

```json
{
  "apiDebug": {
    "provider": "openai",
    "model": "gpt-5-mini",
    "timing": {
      "duration_ms": 1234,
      "timestamp": "2026-01-30T13:20:15.123Z"
    },
    "request": {
      "url": "https://api.openai.com/v1/chat/completions",
      "body": {
        "model": "gpt-5-mini",
        "messages": [ /* ... */ ],
        "max_completion_tokens": 500
      }
    },
    "response": {
      "id": "chatcmpl-AbCdEfGh123456",
      "model": "gpt-5-mini",
      "choices": [ /* ... */ ],
      "usage": {
        "prompt_tokens": 156,
        "completion_tokens": 98,
        "total_tokens": 254
      }
    }
  }
}
```
- **Request Details**: Full API request including URL, headers, and body
- **Response Data**: Complete API response with usage statistics
- **Timing Info**: Request duration and timestamp
- **Token Usage**: Prompt, completion, and total token counts
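One quick sanity check on the usage statistics: the total should equal prompt plus completion tokens. A small sketch, assuming the `usage` shape shown in the `apiDebug` example above:

```typescript
// Shape of the "usage" object in the apiDebug response data.
interface Usage { prompt_tokens: number; completion_tokens: number; total_tokens: number }

// Returns true when the token accounting is internally consistent.
function checkUsage(u: Usage): boolean {
  return u.total_tokens === u.prompt_tokens + u.completion_tokens
}

console.log(checkUsage({ prompt_tokens: 156, completion_tokens: 98, total_tokens: 254 })) // true
```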
Server Console Output
The backend server logs detailed debugging information to the console.
Request Logging
When an AI request is sent:

```text
================================================================================
LLM REQUEST (OpenAI)
================================================================================
Provider: openai
Model: gpt-5-mini
Messages Count: 2
Full Request Body:
{
  model: 'gpt-5-mini',
  messages: [
    {
      role: 'system',
      content: 'You are an expert AI assistant specializing in MQTT...'
    },
    {
      role: 'user',
      content: 'Context:\nTopic: home/livingroom/light\n...'
    }
  ],
  max_completion_tokens: 500
}
System Message:
{
  role: 'system',
  content: 'You are an expert AI assistant specializing in MQTT...'
}
================================================================================
```
**No Truncation**: The server logs show complete objects (inspected with `depth: null` and `maxArrayLength: null`) for full visibility.
Response Logging
When the AI responds:

```text
================================================================================
LLM RESPONSE (OpenAI)
================================================================================
Duration: 1234 ms
Full Response:
{
  id: 'chatcmpl-AbCdEfGh123456',
  object: 'chat.completion',
  created: 1738247815,
  model: 'gpt-5-mini',
  choices: [
    {
      index: 0,
      message: {
        role: 'assistant',
        content: 'This topic represents a smart light in your living room...'
      },
      finish_reason: 'stop'
    }
  ],
  usage: {
    prompt_tokens: 156,
    completion_tokens: 98,
    total_tokens: 254
  },
  system_fingerprint: 'fp_abc123def456'
}
================================================================================

================================================================================
LLM RPC HANDLER - Returning response
================================================================================
Response length: 456
Has debugInfo: true
================================================================================
```
Error Logging
When errors occur:

```text
================================================================================
LLM RPC ERROR
================================================================================
Error message: Invalid API key configuration
Error stack: Error: Invalid API key configuration
    at /home/runner/work/MQTT-Explorer/MQTT-Explorer/dist/src/server.js:642:15
    at process.processTicksAndRejections (node:internal/process/task_queues:95:5)
Full error: Error: Invalid API key configuration {
  status: 401,
  type: 'invalid_request_error',
  code: 'invalid_api_key'
}
================================================================================
```
Production Considerations

These verbose logs are designed for development. In production:
Use log levels (DEBUG, INFO, ERROR)
Sample requests (log 1% for monitoring)
Disable ANSI colors for log aggregation
Filter PII and API keys from logs
Browser Console Output
The frontend logs debug information to the browser console.
Normal Flow
```text
LLM Service: Received result from backend: {
  response: "This topic represents a smart light...",
  debugInfo: {
    provider: "openai",
    model: "gpt-5-mini",
    timing: { duration_ms: 1234, timestamp: "2026-01-30T13:20:15.123Z" },
    request: { url: "...", body: { ... } },
    response: { id: "chatcmpl-...", usage: { ... } }
  }
}
LLM Service: Has response: true
LLM Service: Has debugInfo: true
LLM Service: Assistant message length: 456
LLM Service: Debug info: { provider: "openai", model: "gpt-5-mini", ... }
```
Error Flow
```text
LLM Service: Received result from backend: undefined
LLM Service: Has response: false
LLM Service: Has debugInfo: false
LLM Service: Invalid result from backend: undefined
AI Assistant error: Error: No response from AI assistant
    at LLMService.sendMessage (llmService.ts:440)
Error details: { message: "No response from AI assistant" }
```
Debugging Decoder Issues
When messages don’t decode correctly:
Check the topic pattern
Verify the topic matches the decoder's `canDecodeTopic` pattern:

```javascript
// Sparkplug decoder expects this pattern:
/^spBv1\.0\/[^/]+\/[ND](DATA|CMD|DEATH|BIRTH)\/[^/]+(\/[^/]+)?$/u
```
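You can test candidate topics against the Sparkplug B pattern directly; a quick TypeScript sketch:

```typescript
// Sparkplug B topic pattern: spBv1.0/<group>/<messageType>/<edgeNode>[/<device>]
// where messageType is N* or D* BIRTH/DEATH/DATA/CMD.
const sparkplugTopic = /^spBv1\.0\/[^/]+\/[ND](DATA|CMD|DEATH|BIRTH)\/[^/]+(\/[^/]+)?$/u

console.log(sparkplugTopic.test('spBv1.0/group1/NDATA/edge1'))          // true
console.log(sparkplugTopic.test('spBv1.0/group1/DBIRTH/edge1/device1')) // true
console.log(sparkplugTopic.test('home/livingroom/light'))               // false
```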
Inspect the raw payload
Switch to "hex" view to see the raw bytes: `0x73 0x70 0x42 0x76 0x31 0x2E 0x30 ...`
Try different formats
Use the format dropdown to test different decoders:
String
Sparkplug
int8, uint32, float, etc.
Check for error messages
Look for warning icons (⚠️) next to format options in the dropdown. Hover to see the error message.
Review decoder implementation
Check the decoder code in app/src/decoders/ for logic errors.
Common Decoder Errors
Data type does not align with message
Cause: Binary payload length is not evenly divisible by the data type's byte size.
Example: Trying to decode 5 bytes as uint32 (needs 4-byte alignment).
Solution:
Try uint8 to see individual bytes
Check if the payload is actually a different type
Verify the device is sending the expected format
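The alignment rule behind this error can be checked directly. A sketch with a hypothetical helper (`canDecodeAs` is not the app's decoder API):

```typescript
// A fixed-width numeric format can only decode a buffer whose length is
// a whole multiple of the type's byte size.
function canDecodeAs(payload: Uint8Array, byteSize: number): boolean {
  return payload.length > 0 && payload.length % byteSize === 0
}

const fiveBytes = new Uint8Array([0x01, 0x02, 0x03, 0x04, 0x05])
console.log(canDecodeAs(fiveBytes, 4)) // false - 5 bytes cannot align as uint32
console.log(canDecodeAs(fiveBytes, 1)) // true  - uint8 always aligns
```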
Failed to decode sparkplugb payload
Cause: The payload is not valid Sparkplug B binary data.
Solution:
Verify the topic matches the Sparkplug pattern
Check that the sender is using sparkplug-payload encoder
Try decoding with the Sparkplug test client to isolate issues
Invalid JSON payload
Cause: The string payload is not valid JSON.
Solution:
View in “string” format instead of “json”
Check for trailing commas, unquoted keys, or single quotes
Verify the payload isn’t binary data misinterpreted as text
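You can reproduce the JSON check locally; `tryJson` below is a hypothetical helper, not part of the app:

```typescript
// Attempt to parse a payload as JSON; on failure, fall back to the raw
// string (which is what switching to "string" format does in the UI).
function tryJson(payload: string): { ok: boolean; value: unknown } {
  try {
    return { ok: true, value: JSON.parse(payload) }
  } catch {
    return { ok: false, value: payload }
  }
}

console.log(tryJson('{"state": "on"}').ok)  // true
console.log(tryJson("{'state': 'on'}").ok)  // false - single quotes are invalid JSON
```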
Performance Debugging

Use timing information to identify bottlenecks:
AI Assistant Response Time
```typescript
// Check duration_ms in apiDebug
const { timing } = message.apiDebug
console.log(`Response took ${timing.duration_ms}ms`)

// Slow responses (>5s) may indicate:
// - Large context windows (many related topics)
// - Complex prompts
// - API rate limiting
// - Network latency
```
Token Usage Analysis
```typescript
const { usage } = response
const efficiency = usage.completion_tokens / usage.prompt_tokens

// High prompt tokens indicate:
// - Too much context being sent
// - Need to reduce related topics count
// - Consider shorter system prompt

// High completion tokens indicate:
// - Verbose responses
// - Multiple proposals/questions
// - May need to tune max_completion_tokens
```
Log Levels and Filtering
MQTT Explorer logs can be filtered by level:
View all logs
View errors only
View LLM requests only
Save logs to file
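The filters above can be approximated with standard shell tools. A sketch, assuming the server writes the section headers shown earlier to stdout and that you redirect that output to `app.log` (both assumptions, not a documented app feature):

```shell
# Create a sample log in the server's format, then apply each filter.
printf 'LLM REQUEST (OpenAI)\nProvider: openai\nLLM RPC ERROR\nError message: Invalid API key configuration\n' > app.log

cat app.log                    # view all logs
grep -i "error" app.log        # view errors only
grep "LLM REQUEST" app.log     # view LLM requests only
cp app.log saved-session.log   # save logs to file
```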
Visual Separators
The console uses clear visual boundaries:
```text
================================================================================
SECTION HEADER
================================================================================
Content...
================================================================================
```
Color Coding (in terminal):
Green: Strings
Yellow: Numbers and booleans
Gray: Null/undefined
Cyan: Object keys
Network Debugging
For MQTT connection issues:
Enable MQTT debug logs
Set the DEBUG environment variable:
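For example, MQTT.js-based clients log through the `debug` package under the `mqttjs*` namespace. A sketch, assuming you launch the app from a development checkout via npm:

```shell
# Enable verbose MQTT.js client logs (assumed dev launch via npm).
DEBUG='mqttjs*' npm start

# Broaden to every debug-package namespace if the client logs aren't enough:
DEBUG='*' npm start
```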
Check connection statistics
Open Settings → Broker Statistics to view:
Connection state
Message counts
Subscription list
Error messages
Monitor network traffic
Use Wireshark or tcpdump to capture MQTT packets:

```shell
tcpdump -i any -n port 1883 -w mqtt.pcap
```
Memory Debugging
If MQTT Explorer becomes slow or unresponsive:
Check message history size
Monitor tree node count
Check Electron memory usage
```typescript
const historySize = treeNode.messageHistory.toArray().length
if (historySize > 1000) {
  console.warn(`Large message history: ${historySize} messages`)
}
```
Memory Leaks

Common causes:
Retained message history (grows unbounded)
Subscriptions to high-frequency topics
Decoder caching without cleanup
Event listeners not removed
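The first cause, unbounded message history, is typically fixed by capping retained entries. A minimal TypeScript sketch of the idea (hypothetical helper; the app's actual history type may differ):

```typescript
// Ring-buffer-style history: pushing beyond the limit evicts the oldest
// entry, so memory use stays bounded regardless of message rate.
class BoundedHistory<T> {
  private items: T[] = []
  constructor(private readonly limit: number) {}

  push(item: T): void {
    this.items.push(item)
    if (this.items.length > this.limit) this.items.shift() // drop oldest
  }

  size(): number {
    return this.items.length
  }
}

const history = new BoundedHistory<string>(1000)
for (let i = 0; i < 1500; i++) history.push(`msg ${i}`)
console.log(history.size()) // 1000 - capped despite 1500 pushes
```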
Troubleshooting Checklist
AI Not Responding
Check browser console for errors
Verify API key in settings
Review server logs for API errors
Test with a simple question
Decoder Not Working
Verify topic pattern matches
Check payload in hex view
Look for warning icons
Test with known-good data
Performance Issues
Check message history size
Monitor token usage
Look for high-frequency topics
Profile with DevTools
Connection Problems
Enable MQTT debug logs
Check broker statistics
Verify broker is running
Test with mosquitto_sub
Debug Best Practices
Enable debug mode early
Turn on debug view before encountering issues to capture the full sequence of events.
Check multiple log sources
Issues may appear in browser console, server logs, or debug UI - check all three.
Use incremental testing
Test each component (connection, decoder, AI) separately to isolate issues.
Save debug output
Copy debug JSON or save logs to files for later analysis or bug reports.
Compare with working examples
Use the test suite’s mock clients to verify expected behavior.
See Also