Case Study: Wireless Data Collection

InfinityQS Blog
By InfinityQS Blog | October 6, 2011

It sounds like such a great concept: using wireless connections to import gage data into ProFicient SPC. But how do you implement it? What follows is a case study of a bottling company that I recently visited.

The hardware side of the implementation consisted of MicroRidge's wireless base and remote stations plus a serial-to-TCP/IP convertor. The data was then transferred to a server running DMS and Grid Provider, while the Data Convertor was configured to grab the data and store it in the Historian table. Sounds simple, right?

Let’s take a look at the steps needed to make this work. Each lab has between 4 and 8 pieces of test equipment that need to transfer their results to the Historian table. Additionally, each piece of test equipment outputs data using its own structure, some output more than one test result, and each lab has just one port on one IP address through which to transfer all the data to the DMS server. To top things off, every lab has to transfer data into the Historian table the same way, because each site uses the same database and the same project to collect its SPC data.

Though the task is challenging, some planning and a structured implementation will turn the individual test results into meaningful subgroup data within ProFicient.

The first step is to understand what the raw data looks like from each piece of test equipment. Some equipment, such as Ohaus balances, Hach pH meters, and turbidity meters, outputs just a single test result as part of its data stream. Others, like the Metrohm Titrator or the Anton-Paar analysis meters, output multiple lines of data that appear as a report. Once we understand the nature of the raw data, we can use the software for the MicroRidge wireless system to make all the equipment send the same type of data stream.

MicroRidge’s MobileCollect software allows a user to configure each remote wireless station to create this data consistency. Each remote station has up to three parsing routines that look for keywords or phrases and then capture data from the stream after finding them. As part of the remote setup, the user can also configure the remote station to send its identification as part of the data stream, with each item in the stream separated by a character of the user’s choice, and can place the remote identification before or after the test results. This configuration is the first critical step in making the system successful: once it is in place, each time a device sends data, the data stream is converted to “Remote ID, Test Result”.
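The parsing behavior described above can be sketched in Python. This is a hypothetical illustration of the idea, not MobileCollect's actual internals; the function name, keyword pattern, and remote ID are all assumptions for the example.

```python
# Hypothetical sketch of a remote-station parsing routine: scan the raw
# instrument output for a keyword, capture the numeric value that follows
# it, and emit a uniform "Remote ID,Test Result" line.
import re


def parse_stream(raw, keyword, remote_id, sep=","):
    """Find `keyword` in the raw data stream and capture the number after it."""
    match = re.search(re.escape(keyword) + r"\s*[:=]?\s*(-?\d+(?:\.\d+)?)", raw)
    if match is None:
        return None  # keyword not found; nothing to transmit
    return f"{remote_id}{sep}{match.group(1)}"


# A multi-line, report-style output (e.g., from a titrator) still yields a
# single consistent line once the keyword is located.
raw_report = "Sample 12\nTitration complete\nResult: 4.87 pH\n"
print(parse_stream(raw_report, "Result", "REM-03"))  # REM-03,4.87
```

However each instrument formats its report, the output of this step is always the same two-field line, which is what makes the rest of the pipeline possible.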

Once the remotes are programmed, the base unit is connected to a workstation running MicroRidge’s MobileCollect software to confirm the data streams from all the remotes. In the case of the Anton-Paar, multiple lines appear on the screen because this device sends out three separate test results. Once this confirmation is complete, the base unit is removed from the workstation and connected to the serial-to-TCP/IP convertor. The IP address of the convertor is selectable by the user, and while the port identification can be changed, it is usually left at its default setting. Each lab is identified by the IP address of its convertor.
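To make the convertor step concrete, here is a minimal sketch of what a client reading those parsed lines over TCP might look like. The address, port, and line-based framing are assumptions for illustration; the actual convertor's defaults and framing may differ.

```python
# Hypothetical sketch: read "Remote ID,Test Result" lines from the TCP
# port that a serial-to-TCP/IP convertor exposes for one lab.
import socket


def read_results(host, port, max_lines=10):
    """Collect up to `max_lines` newline-terminated result lines."""
    lines = []
    with socket.create_connection((host, port), timeout=5) as conn:
        buf = b""
        while len(lines) < max_lines:
            chunk = conn.recv(1024)
            if not chunk:
                break  # convertor closed the connection
            buf += chunk
            while b"\n" in buf:
                line, buf = buf.split(b"\n", 1)
                lines.append(line.decode().strip())
    return lines[:max_lines]


# Example (assumed address and port for one lab's convertor):
# results = read_results("192.168.1.50", 10001)
```

Because each lab has exactly one convertor, the host address alone tells the server which lab a result came from.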

The next step is to set up the Grid Provider of DMS to grab the data from the wireless system. With all the work done to parse the raw data, a single format can be used to grab the test results and put them into the DMS store. The format has two elements: one that captures the Remote ID and one that captures the test result. While a configuration is needed for each TCP/IP convertor, i.e., each test lab, this one format can be used in each configuration. In each case, the Grid Provider configuration was named after the lab location to make the data easy to access with the Data Convertor.
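The two-element format amounts to splitting each incoming line at the chosen separator. A hypothetical sketch (the function name and separator are assumptions, not Grid Provider's configuration syntax):

```python
# Hypothetical sketch of the two-element format: split each incoming line
# on the separator into (Remote ID, test result).
def split_record(line, sep=","):
    remote_id, result = line.strip().split(sep, 1)
    return remote_id, float(result)


print(split_record("REM-12,6.54"))  # ('REM-12', 6.54)
```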

The Data Convertor of DMS is used to push the test results into the Historian table of the ProFicient database. The convertor names mimic the Grid Provider names in that they are based on the lab identification. The family items in each configuration are named after the test equipment, with each item in the family named after the test result. Each item in the family was given two conditions to ensure data integrity: the data must come from the correct MicroRidge remote unit, and the value must be new. The measurement for Brix, the amount of sugar in a sample, had a third condition: the value must also be greater than 1. This was added to prevent Brix values from diet products, which by definition are not supposed to contain sugar, from being added to the database. The convertor is also set up to automatically send the results into the Historian table, and the end user can define how often the Historian table is purged of data.
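The three conditions above can be sketched as a simple filter. This is an illustration of the logic, not the Data Convertor's actual rule engine; the function, remote IDs, and state handling are assumptions.

```python
# Hypothetical sketch of the Data Convertor conditions: accept a reading
# only if it comes from the expected remote, differs from the last value
# accepted for that test, and (for Brix) exceeds 1 to screen out diet
# products, which should contain no sugar.
last_seen = {}  # most recent value accepted per (remote, test) pair


def accept(remote_id, expected_remote, test_name, value):
    if remote_id != expected_remote:
        return False  # wrong instrument
    key = (remote_id, test_name)
    if last_seen.get(key) == value:
        return False  # not a new value
    if test_name == "Brix" and value <= 1:
        return False  # diet product; Brix readings this low are rejected
    last_seen[key] = value
    return True


print(accept("REM-07", "REM-07", "Brix", 11.2))  # True: new, valid reading
print(accept("REM-07", "REM-07", "Brix", 11.2))  # False: duplicate value
print(accept("REM-07", "REM-07", "Brix", 0.1))   # False: below the Brix floor
```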

The final step is configuring the project used to collect the data. Each test whose result comes from a gage is set up to read the Historian table. The configuration of the test properties uses the corporate hierarchy of the database to limit the data that is available to each user. Additionally, the data is presented as a list from the specific piece of test equipment that measured the value, with a 4-hour time limit. Once a value is selected from the list, it is removed. In most cases, only one or two results per gage are available, and the timestamp is the primary key for selecting the desired result.
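The project-side lookup described above can be sketched as follows. The table layout, gage names, and timestamps are invented for the example; ProFicient's actual Historian schema is not shown here.

```python
# Hypothetical sketch of the project-side lookup: list recent Historian
# rows for one gage within a 4-hour window, newest first, and remove a
# row once it has been selected.
from datetime import datetime, timedelta

historian = [
    {"gage": "Titrator-1", "value": 4.87, "ts": datetime(2011, 10, 6, 9, 15)},
    {"gage": "Titrator-1", "value": 4.91, "ts": datetime(2011, 10, 6, 11, 40)},
    {"gage": "pH-2",       "value": 7.02, "ts": datetime(2011, 10, 6, 11, 45)},
]


def available(gage, now, window=timedelta(hours=4)):
    """Rows for this gage no older than the time limit, newest first."""
    rows = [r for r in historian if r["gage"] == gage and now - r["ts"] <= window]
    return sorted(rows, key=lambda r: r["ts"], reverse=True)


def select(row):
    historian.remove(row)  # a chosen value disappears from the list
    return row["value"]


now = datetime(2011, 10, 6, 12, 0)
choices = available("Titrator-1", now)  # both readings are under 4 h old
print(select(choices[0]))               # 4.91: the timestamp picks the newest
```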

So we’ve transferred our data from the gage to the project using a wireless setup in conjunction with DMS and the Historian table. Now, despite the large number of gages spread across a multitude of labs, not a single one is actually plugged into a workstation. Success!



