# Crystal Mesh Dashboard: A Web-Based Control Plane for a Multi-Radio 802.11s Testbed

**Draft (Joplin-compatible Markdown)**

**Authors:** Tim Brockmann; Michael Rethfeldt; Benjamin Beichler; Frank Golatowski; Christian Haubelt; *[add/adjust]*

**Affiliation:** Institute of Applied Microelectronics and Computer Engineering, University of Rostock, Germany

**Keywords:** WLAN mesh network, IEEE 802.11s, testbed management, multi-radio, time synchronization, dashboard

---

## Abstract

Operating multi-hop IEEE 802.11s testbeds at scale requires frequent reconfiguration, reliable monitoring, and safe operational control. We present the *Crystal Mesh* dashboard, a web-based control plane designed for the Multi‑Mesh testbed, a miniaturized 5×5 multi‑radio WLAN mesh platform targeting Industry 4.0 requirements. The dashboard unifies live telemetry, mesh configuration, backup/restore workflows, PXE boot control, and batch node management. It integrates MQTT-based telemetry with SSH-based control actions and script-backed configuration, enabling fast, repeatable experiments while reducing operator overhead. We outline the dashboard architecture, its core modules, and typical operational workflows, and discuss current limitations and future extensions.

---

## I. Introduction

Modern industrial wireless systems demand low latency and high reliability, motivating the use of IEEE 802.11s mesh networks for flexible, fault-tolerant communication. The Multi‑Mesh testbed [1] (5×5 nodes, multi‑radio, miniaturized geometry) provides an experimental platform for evaluating multi‑path strategies and time synchronization techniques for Industry 4.0 applications. However, daily operation of such a testbed is complex: nodes must be reconfigured, monitored, rebooted, or reinitialized; PXE boot settings and experiment scripts must be updated; and topology, channel, and radio parameters must be adjusted frequently.
To address these operational challenges, we developed the *Crystal Mesh* dashboard, a web interface that consolidates telemetry, configuration, and control. The dashboard reduces manual SSH workflows, improves visibility, and enables reproducible experiments through script-backed configuration.

**Contributions:**

- Centralized, web-based management for WMN operations.
- Integrated control and telemetry across MQTT/SSH paths.
- Script-backed configuration and provisioning workflows.
- Operator-focused UX for rapid, safe reconfiguration.

---

## II. Testbed Context (Multi‑Mesh)

The Multi‑Mesh testbed is a miniaturized, multi‑radio IEEE 802.11s platform with a 5×5 node grid. It supports simultaneous transmission on orthogonal channels, enabling evaluation of redundancy, fault tolerance, and multi‑path strategies. The platform targets low‑latency industrial communication and includes time synchronization mechanisms (e.g., Wi‑PTP extensions) for precise timing evaluation. Each node provides two Wi‑Fi radios, adjustable power, and configurable topology/placement in a reduced‑scale environment.

The dashboard builds on the Multi‑Mesh testbed to simplify daily operations and enable rapid configuration changes across the grid.

---

## III. System Overview

**Architecture.** The dashboard follows a thin‑client architecture with a web UI, a server backend, and node-side agents:

- **Web UI:** Cockpit, B&R, PXE, and NetConf pages.
- **Server:** Serves the UI, mediates MQTT telemetry and SSH commands, and persists configuration state.
- **Node agents:** MQTT publisher and shell scripts for configuration and boot processes.

**Data paths:**

- **Telemetry:** MQTT → server → WebSocket → UI.
- **Control:** UI → server → SSH (batch commands) and MQTT (node actions).

**Configuration sources:**

- `init_node.sh`: mesh configuration (topology, channel, TX power, noise floor).
- `pxelinux.cfg/default`: PXE boot label control.
- `apu_backup_plan.sh`: backup/restore switches and versions.

---
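As an illustration of the configuration sources above, the server must read mesh parameters back out of `init_node.sh` to populate the UI. The following is a minimal sketch, assuming the script stores parameters as plain `KEY=VALUE` shell assignments; the variable names (`MESH_CHANNEL`, `TX_POWER`, `TOPOLOGY`) are illustrative, not taken from the actual script:

```javascript
// Sketch: extract simple KEY=VALUE shell assignments from an init script.
// Variable names below are hypothetical; the real init_node.sh may differ.
function parseShellVars(scriptText) {
  const vars = {};
  for (const line of scriptText.split("\n")) {
    // Match lines like: MESH_CHANNEL=36   or   TX_POWER="20"  # comment
    const m = line.match(/^\s*([A-Za-z_][A-Za-z0-9_]*)=("?)([^"#]*)\2\s*(#.*)?$/);
    if (m) vars[m[1]] = m[3].trim();
  }
  return vars;
}

// Example: a fragment resembling a node init script.
const example = [
  "#!/bin/sh",
  "MESH_CHANNEL=36      # 5 GHz control channel",
  'TX_POWER="20"        # dBm',
  "TOPOLOGY=grid5x5",
].join("\n");

const cfg = parseShellVars(example);
console.log(cfg.MESH_CHANNEL, cfg.TX_POWER, cfg.TOPOLOGY); // → 36 20 grid5x5
```

In the dashboard's setting, the same parsed values would back the NetConf form fields, and writing a change would regenerate the corresponding assignment lines before re-initializing the nodes.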
## IV. Dashboard Design and Features

### A. Cockpit (Real‑Time Monitoring)

- Mesh grid with node health, neighbor sets, and path views.
- RTT/latency display from node-side ping statistics.
- Command line with presets and a node selection overlay for batch actions.
- Optional OS status check and visual state badges.

### B. Backup & Restore (B&R)

- Script-backed action toggles mapped to `apu_backup_plan.sh` switches.
- Version selection for kernel/backup/ROM from server-side paths.
- *Manage Targets* grid for node selection with persistent state across clients.
- Batch reboot actions with status feedback (green/red markers).

### C. PXE Boot Configuration

- Safe PXE label switching with a preview of the current configuration.
- Comment/uncomment handling to ensure that only one default label is active.

### D. NetConf (Mesh Configuration)

- Topology, channel, TX power, and noise floor for mesh0/mesh1.
- Live values read from `init_node.sh`, with defaults stored in the project config.
- Channel dropdown derived from the driver frequency table.

### E. Security & Access Control

- Optional login enforcement and session handling.
- Dashboard-level toggle for the authentication requirement.

---

## V. Implementation Details

- **Backend:** Node.js server, REST endpoints, WebSocket updates.
- **Telemetry:** MQTT topics for node data, neighbors, and paths.
- **Control:** SSH for batch commands; MQTT for node actions.
- **Persistence:** Shared UI state (node selection, toggles) stored server‑side.
- **Safety:** Rate-limited actions and explicit operator triggers for critical operations.

---

## VI. Operational Workflow Examples

1. **Topology change:** Update topology/channel in NetConf → write to `init_node.sh` → re-init nodes.
2. **Backup/restore:** Select versions and toggles → run the script-backed workflow → monitor status.
3. **Batch recovery:** Select targets → SSH re-init or reboot → confirm feedback markers.

---
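The PXE page's comment/uncomment handling described above can be sketched as a pure text transformation on `pxelinux.cfg/default`. This assumes the file selects the active boot entry via a single uncommented `DEFAULT <label>` line; the label names used here (`mesh-node`, `rescue`) are hypothetical:

```javascript
// Sketch: switch the active PXE boot label by commenting out every
// DEFAULT line except the requested one, uncommenting it if needed.
// Label names in the example are hypothetical.
function setPxeDefault(configText, label) {
  return configText
    .split("\n")
    .map((line) => {
      // Match commented or uncommented DEFAULT lines, e.g. "#DEFAULT rescue".
      const m = line.match(/^#?\s*DEFAULT\s+(\S+)\s*$/i);
      if (!m) return line; // leave LABEL/KERNEL/APPEND lines untouched
      return m[1] === label ? `DEFAULT ${m[1]}` : `#DEFAULT ${m[1]}`;
    })
    .join("\n");
}

// Example fragment with two boot entries.
const pxeCfg = [
  "DEFAULT mesh-node",
  "#DEFAULT rescue",
  "LABEL mesh-node",
  "  KERNEL vmlinuz",
].join("\n");

const switched = setPxeDefault(pxeCfg, "rescue");
console.log(switched.split("\n")[0]); // → #DEFAULT mesh-node
console.log(switched.split("\n")[1]); // → DEFAULT rescue
```

Operating on the whole file in one pass is what guarantees the invariant from Section IV-C: at most one `DEFAULT` line can remain uncommented after the transformation.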
## VII. Discussion and Limitations

- The dashboard prioritizes operator efficiency; a formal performance evaluation is pending.
- SSH control depends on node reachability during reboot cycles.
- MQTT and WebSocket telemetry provide near‑real‑time status but rely on stable broker connectivity.
- Future work: role‑based access control, experiment scheduling, automatic report generation, and integration of additional testbed KPIs.

---

## VIII. Conclusion

The *Crystal Mesh* dashboard provides a unified control plane for managing a multi‑radio IEEE 802.11s WMN testbed. By integrating telemetry, configuration, and batch control, it reduces manual overhead and enables rapid experimentation on the Multi‑Mesh platform.

---

## Figures (Placeholders)

- **Fig. 1:** System architecture and data/control paths.
- **Fig. 2:** UI overview: Cockpit, B&R, PXE, NetConf.
- **Fig. 3:** Manage Targets workflow (selection, command, feedback).

---

## References (Draft)

- [1] T. Brockmann *et al.*, “Multi‑Mesh: A Miniaturized Multi‑Radio WLAN Mesh Testbed,” *IEMCON 2024*, University of Rostock.
- [2] *Additional references to be added per IEEE format.*