Resource Logic Client/Server Development Services
Autoconfiguration Using Wireless Mesh Networking
Most uses of Wireless Mesh Networking (IEEE 802.15.4) are for instrumentation, data collection, or low-data-rate command transmission. The scenarios most often painted for this protocol involve home automation, factory automation, building monitoring, or remote control. In such uses, certain assumptions are made about the individual nodes and the environment.
An alternative use of mesh networks is to act as a backup for validating the operational status and configuration of a ‘hard-wired’ network installation.
A hypothetical pizza factory with five processing machines is used for illustration. In this factory the entire process is automated. The machines are arranged so that the finished product of one step is carried directly into the next step as a raw-material input. The machines communicate with each other, and with a supervisory console, via hard-wired Ethernet.
A number of things can go wrong, and some of them are not discoverable by a traditional hard-wired network.
In this environment, the mesh network serves to aid recovery when other systems are not working.
The engineers designing the system set up each node so that it is able to query its immediate upstream and downstream neighbors. This is done with a highly directional IR link, which returns a valid status only when the receiver on one machine is physically aligned with the sender on another. If a machine does not find its neighbors as expected, it issues a fault status and does not operate.
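The neighbor-validation logic described above might be sketched as follows. This is a minimal illustration, not a real driver: `ir_probe`, the machine names, and the `EXPECTED` roster are all hypothetical stand-ins for the hardware-specific pieces.

```python
# Hypothetical sketch of the neighbor-validation logic described above.
# ir_probe() stands in for the directional IR transceiver driver; it is
# assumed to return the neighbor's machine ID only when the optics on
# the two machines are physically aligned.

EXPECTED = {"topper": {"upstream": "saucer", "downstream": "oven"}}

def ir_probe(direction):
    """Placeholder for the IR link driver; returns a machine ID or None."""
    raise NotImplementedError  # hardware-specific in a real node

def check_alignment(machine_id, probe=ir_probe):
    """Compare what the IR link sees against the expected neighbors."""
    expected = EXPECTED[machine_id]
    faults = []
    for direction in ("upstream", "downstream"):
        seen = probe(direction)
        if seen != expected[direction]:
            faults.append((direction, expected[direction], seen))
    return faults  # an empty list means the machine may operate

# A correctly aligned line reports no faults:
aligned = check_alignment(
    "topper", probe=lambda d: {"upstream": "saucer", "downstream": "oven"}[d])
assert aligned == []
```

A node that receives a non-empty fault list would raise its fault status and refuse to operate, as described above.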
At the time the machines are built, they are programmed to carry out their respective tasks upon receipt of 'orders'. An 'order' specifies a recipe, the quantities of each ingredient, and the steps by which they are combined and processed. Each machine is also programmed to issue a 'heartbeat' through the wired network so that the supervisory console can confirm it is in communication with the entire installation. If a heartbeat stops, the supervisory console displays a fault condition and operations are suspended. If an individual machine has gone offline, the console can display a status for that point in the process. If the Ethernet switch goes offline, however, the console has no information at all, even if the machines themselves are still operational.
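The console's side of the heartbeat scheme can be sketched as below. The machine names, the timeout value, and the `Console` class are illustrative assumptions; the point is only the distinction between one silent machine (a machine fault) and all machines silent (likely the switch).

```python
# Hypothetical sketch of heartbeat monitoring at the supervisory console.
# Each machine's last-seen timestamp is recorded; any machine silent for
# longer than TIMEOUT is flagged. If every machine is flagged at once,
# the fault is more plausibly the Ethernet switch than the machines.

TIMEOUT = 5.0  # seconds without a heartbeat before a fault is raised

class Console:
    def __init__(self, machines):
        self.last_seen = {m: None for m in machines}

    def heartbeat(self, machine, now):
        """Record a heartbeat received from `machine` at time `now`."""
        self.last_seen[machine] = now

    def faults(self, now):
        """Return the machines that have fallen silent."""
        return [m for m, t in self.last_seen.items()
                if t is None or now - t > TIMEOUT]

console = Console(["press", "topper", "oven"])
for m in ("press", "topper", "oven"):
    console.heartbeat(m, 0.0)
console.heartbeat("press", 4.0)
# At t = 6.0, "topper" and "oven" have been silent for 6 s > TIMEOUT:
assert sorted(console.faults(6.0)) == ["oven", "topper"]
```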
The addition of a mesh node to each machine gives the supervisory console a 'back channel'. Through it, the console can query the status of each node even if physical wiring is broken, power has failed, equipment has been moved or dislodged, or the controller in a given machine has stopped functioning. Furthermore, the mesh node is programmed in advance with the configuration settings put into each machine during the design and installation phases. Together, these capabilities make it possible for a technician to home in on a particular box or board in short order. If a component that contains configuration settings has to be replaced, the mesh node acts as a backup repository for those settings and can forward them to the new component immediately.
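The backup-repository role might look like the following sketch. The class names, the settings dictionary, and the `apply` interface are all hypothetical; a real node would speak whatever provisioning protocol the controller uses.

```python
# Hypothetical sketch of a mesh node acting as a configuration
# repository: it holds a snapshot of the settings loaded at
# installation and can push them to a replacement controller.

class MeshNode:
    def __init__(self, machine_id, config):
        self.machine_id = machine_id
        self._config = dict(config)  # snapshot taken at installation

    def restore_to(self, controller):
        """Forward the stored settings to a freshly installed controller."""
        controller.apply(self._config)

class Controller:
    """Stand-in for a replacement board with factory-blank settings."""
    def __init__(self):
        self.settings = {}

    def apply(self, config):
        self.settings.update(config)

node = MeshNode("oven_3", {"temp_c": 260, "belt_speed": 1.2})
replacement = Controller()   # new board, no settings yet
node.restore_to(replacement) # settings forwarded immediately
assert replacement.settings == {"temp_c": 260, "belt_speed": 1.2}
```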
This creates the means for diagnosing far more complex problems as the number of nodes proliferates, which would be the case in a much larger plant. If the plant in question is dealing with flammable or explosive materials, the wireless nodes can communicate status in the event of fire or explosion, so that plant managers and first responders can pare down the list of possible problem areas. This is particularly helpful when it is necessary to shut down processing nodes in a particular order, such that the inputs from one node stop flowing into the next node before that node is, in turn, brought to a halt.
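Deriving a safe shutdown order from each node's recorded upstream/downstream links can be sketched as follows. The line layout in `DOWNSTREAM` is a made-up example; the technique is simply to stop feeders before the machines they feed.

```python
# Hypothetical sketch: derive a safe shutdown order from each node's
# recorded downstream link, so that inputs from one node stop flowing
# before the next node is, in turn, brought to a halt.

DOWNSTREAM = {"dough_press": "saucer", "saucer": "topper",
              "topper": "oven", "oven": "packer", "packer": None}

def shutdown_order(downstream):
    """Walk the line head-to-tail: stop each feeder before its consumer."""
    fed = {v for v in downstream.values() if v}
    head = next(n for n in downstream if n not in fed)  # nothing feeds it
    order, node = [], head
    while node:
        order.append(node)
        node = downstream[node]
    return order

assert shutdown_order(DOWNSTREAM) == [
    "dough_press", "saucer", "topper", "oven", "packer"]
```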
During the engineering phases, the mesh nodes can be programmed with settings that indicate what should be upstream and downstream of each particular node. As the equipment installation progresses, the mesh network can report the presence and signal strength of any neighbors, indicating whether desired items are missing, the wrong equipment is 'next door', or, to some extent, whether the physical connections are incorrect. This creates the potential for substantially increasing the deployment speed of the plant. In this respect the processing nodes have a 'mind of their own', and 'know in advance' how they are supposed to be set up.
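An installation-time check along these lines might be sketched as below. The machine IDs and RSSI figures are invented for illustration; the shape of the check is just a comparison of the expected roster against the neighbors the radio actually hears.

```python
# Hypothetical sketch of the installation-time check: each mesh node
# carries the machine IDs expected nearby, and compares them with the
# neighbors it actually hears, along with received signal strength.

def installation_report(expected, observed):
    """expected: set of machine IDs; observed: {machine_id: rssi_dbm}."""
    missing = expected - observed.keys()
    unexpected = observed.keys() - expected
    return {"missing": sorted(missing),        # desired items not found
            "unexpected": sorted(unexpected),  # wrong equipment next door
            "rssi": {m: observed[m] for m in expected & observed.keys()}}

report = installation_report({"press", "topper"},
                             {"press": -48, "saucer": -71})
assert report["missing"] == ["topper"]
assert report["unexpected"] == ["saucer"]
assert report["rssi"] == {"press": -48}
```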
In circumstances where plant equipment (particularly dies, jigs, and fixtures) is shifted in and out of the production stream, the ability to report even when 'shelved' makes management much simpler. For some bulk products the plant may be physically re-plumbed for a particular product, particularly if the product is new or the inputs are highly variable and unpredictable. The probability of incorrect wiring combined with incorrect plumbing creates the potential for significant waste, if not outright hazard.
Probably the least envisioned scenario for such a network is a server farm. Given, say, 100 servers, 10 to a rack and 10 racks in the farm, along with their associated routers, UPSs, air conditioners, and other incidental hardware, an administrator might be 'watching' upwards of 250 separate pieces of hardware. The most likely point of failure is a server, particularly its hard drives and power supplies. If a communication fault develops on the network, it is highly desirable to check on the status of servers independently of the hard-wired network. If a server is not functioning, it cannot report anything to a management console; however, a mesh node built into the server might report that a power supply has failed or that the server shut itself down due to overheating.
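The kind of out-of-band status a server's mesh node might report can be sketched very simply. The sensor inputs, thresholds, and message strings here are illustrative assumptions, not any particular vendor's interface.

```python
# Hypothetical sketch of the out-of-band status a server's mesh node
# might report when the server itself cannot answer over Ethernet.

def mesh_status(psu_ok, cpu_temp_c, shutdown_temp_c=95):
    """Summarize hardware state from sensors the mesh node can read."""
    if not psu_ok:
        return "power supply failed"
    if cpu_temp_c >= shutdown_temp_c:
        return "shut down: overheated"
    return "ok"

assert mesh_status(psu_ok=False, cpu_temp_c=40) == "power supply failed"
assert mesh_status(psu_ok=True, cpu_temp_c=98) == "shut down: overheated"
assert mesh_status(psu_ok=True, cpu_temp_c=55) == "ok"
```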
Such a system of embedded controllers makes it possible for the designers, builders, and engineers to ‘enforce’ the appropriate use of their product by plant operators and technicians. This can have the effect of reducing cost, increasing the speed of initial installation, adding flexibility to manufacturing or process operations, and improving overall reliability.