Blog

General methodology of asset usage profiling for proactive maintenance prediction

When analysing sensor data, you are typically confronted with different challenges relating to data quality. Here, we show you how these challenges can be dealt with and how we derive some initial insights from cleaned data via exploration techniques such as clustering.

Nowadays, especially with the advent of the Internet of Things (IoT), large quantities of sensor data are collected. Small sensors can be easily installed, on multipurpose industrial vehicles for instance, in order to measure a vast range of parameters. The collected data can serve many purposes, e.g. to predict system maintenance. However, when analysing it, you are typically confronted with various challenges relating to data quality, e.g. unrealistic or missing values, outliers, correlations and other typical and atypical obstacles. The aim of this article is to show how these challenges can be dealt with and how we derive some initial insights from cleaned data via exploration techniques such as clustering.

Within the MANTIS project, Sirris is developing a general methodology that can be used to explore sensor data from a fleet of industrial assets. The main goal of the methodology is to profile asset usages, i.e. define separate groups of usages that share common characteristics. This can help experts to identify potential problems, which are not visually observable, when the resulting profiles are compared with the expected behaviour of the assets and when anomalies are detected.

In this article, we will describe the methodology of asset usage profiling for proactive maintenance prediction. The data used in this article is confidential and anonymised; we therefore cannot describe it in detail. It mainly consists of duration and resource consumption as well as a range of parameters measured via different sensors. For our analysis, we used Jupyter Notebook with appropriate libraries such as pandas, scipy and scikit-learn.

Data preparation

Data collected from different sources can be polluted: it may contain duplicates, wrong values, empty fields and outliers, all of which should be considered carefully. Therefore, the natural first step is to conduct an initial exploration of the data and to prepare a single reference dataset for advanced analysis: first by cleaning the data by means of visual and statistical methods, then by selecting the right attributes you wish to work with further.

In our example dataset, we find negative or zero resource consumption, which is obviously impossible, as shown in Figure 1. In our case, since there are few outliers of this type, we simply remove them from the dataset.

Figure 1 Zero or negative consumption
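As a minimal sketch of this cleaning step with pandas (the library used in our notebooks), assuming a hypothetical usage table with duration and consumption columns, physically impossible records can be flagged and dropped like this:

```python
import pandas as pd

# Hypothetical usage records: duration in seconds, resource consumption in units.
df = pd.DataFrame({
    "duration": [120.0, 300.0, 90.0, 45.0],
    "consumption": [3.5, 7.2, -1.0, 0.0],
})

# Consumption must be strictly positive; flag and remove impossible records.
invalid = df["consumption"] <= 0
df = df[~invalid].reset_index(drop=True)
```

The boolean mask also lets you inspect the removed rows first (`df[invalid]`) before deciding to drop them.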

Another possible example is that of an erroneous date in the data. For example, dates may be too old compared to the rest of your dataset; future dates can even exist. Your decision to maintain, fix or remove wrong instances can depend on many factors, such as how big your dataset is, whether an erroneous date is very important at the current stage, etc. In our case, we maintain these instances since, at this moment, the date is not important for analysis and the percentage of this subset is very low.
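A sketch of how such date checks can be expressed, assuming hypothetical timestamps and an arbitrary "too old" cut-off; here we only flag suspicious instances rather than remove them, mirroring our decision to keep them:

```python
import pandas as pd

# Hypothetical usage timestamps, including one far in the past and one in the future.
dates = pd.to_datetime(pd.Series([
    "2016-05-01", "2016-06-12", "1970-01-01", "2030-01-01",
]))

now = pd.Timestamp("2017-01-01")          # reference "current" date for the example
too_old = pd.Timestamp("2000-01-01")      # assumed cut-off for plausibility
suspicious = (dates < too_old) | (dates > now)
```

The fraction `suspicious.mean()` tells you how large the affected subset is, which informs the keep/fix/remove decision.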

Outliers are extreme values that deviate markedly from the other observations and also need to be dealt with carefully. They can be detected visually or by statistical means. Sometimes we can simply remove them; sometimes we want to analyse them thoroughly. Visualising the data directly reveals some potential outliers; refer to the point in the upper right-hand corner of Figure 2. In our case, such high values for duration and consumption are impossible, as shown in Figure 3. Since it is the first record for this type of asset, it may have been entered manually for test purposes; we consequently choose to remove it.

Figure 2 Visual check for outliers

Figure 3 Impossible data

In Figure 4, we can see a positive linear correlation between consumption and duration, which is to be expected, although we may still find some outliers using the 3-sigma rule. This rule states that, for the normal distribution, approximately 99.7 percent of observations lie within 3 standard deviations of the mean. Moreover, by Chebyshev's inequality, even for non-normally distributed data, at least 88.9 percent of cases fall within 3 standard deviations. Thus, we consider observations beyond 3 sigmas as outliers.

Figure 4 Data after cleaning

In Figure 5, we see that our data is approximately normal, centred around 0, with most values lying between -2 and 2. This means that the 3-sigma rule will give us fairly accurate results. Note that you must normalise your data before applying this rule.

Figure 5 Distribution of normalised consumption/s
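A minimal sketch of normalisation followed by the 3-sigma rule, on synthetic data standing in for the consumption-per-second values (one extreme value is injected so the rule has something to find):

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
# Synthetic consumption-per-second values for a fleet of usages.
rate = pd.Series(rng.normal(loc=5.0, scale=0.5, size=1000))
rate.iloc[0] = 12.0  # inject one extreme value

# Normalise to zero mean and unit variance, then apply the 3-sigma rule.
z = (rate - rate.mean()) / rate.std()
outliers = rate[z.abs() > 3]
```

Anything flagged here is a candidate for discussion with a domain expert rather than automatic removal.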

Results are shown in Figure 6. The reason for such a significant deviation from the average in consumption and duration of certain usages is to be discussed with a domain expert. One instance with very low consumption for a long duration raises particular questions (Figure 7).

Figure 6 3-sigma rule applied to normalised data

Figure 7 Very low consumption for its duration

Advanced data exploration

As previously stated, we are looking to profile asset usages in order to identify abnormal behaviour and therefore, along with duration and resource consumption, we also need to investigate the operational sensor data for each asset. This requires us to define groups of usages that share common characteristics; however, before doing so, we need to select a representative subset of data with the right sensors.

From the preliminary analysis, we observed that the number of sensors can differ between the assets and even between usages of the same asset. Therefore, for later modelling, we need to select only usages which always contain the same sensors, i.e. training a model requires vectors of the same length. To achieve this, we can use the following approach, illustrated in Figure 8.

Figure 8 Selecting sensors

Each asset has a number of sensors that can differ from usage to usage, i.e. some modules can be removed from or installed on the asset. Thus, we first check the presence of each sensor across the whole dataset. Then, we select all usages with sensors that are present above a certain percentage, e.g. 95 percent, of the whole dataset. Let's assume our dataset contains 17 sensors that are present in 95 percent of all usages. We select these sensors and discard those with lower presence percentages. This way, we create a vector of sensors of length 17. Since we decided to include sensors if they are present in 95 percent of usages, a limited number of usages may still be selected although they do not contain some of the selected sensors, i.e. we introduce gaps, which are marked in yellow in the figure. To fix these gaps, you can either discard these usages or impute values for the missing sensors. Imputation can be complex, as you need to know what these sensors mean and how they are configured. In our case, these details are anonymised and these usages are consequently discarded. You may need to lower your presence-percentage criterion in order to keep a sufficiently representative dataset for further analysis.
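The selection step above can be sketched in pandas on a tiny hypothetical dataset (a 75 percent threshold is used here instead of 95 percent, purely so the toy example has something to discard; NaN marks an absent sensor):

```python
import pandas as pd

# Hypothetical usages: each row is a usage, each column a sensor reading;
# NaN means the sensor was absent for that usage.
usages = pd.DataFrame({
    "s1": [1.0, 2.0, 1.5, 2.2],
    "s2": [0.3, None, 0.4, 0.5],
    "s3": [None, None, None, 7.0],
})

threshold = 0.75  # keep sensors present in at least 75% of usages
presence = usages.notna().mean()
kept_sensors = presence[presence >= threshold].index

# Discard low-presence sensors, then drop usages that still have gaps.
subset = usages[kept_sensors].dropna()
```

Every remaining row is now a fixed-length vector, as required for training a model.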

After the optimal subset is selected, we check the correlation of the remaining sensors. We do this because we want to remove redundant information and to simplify and speed up our calculations. Plotting a heatmap is a good way of visualising correlation. We do this for the remaining sensors as shown in Figure 9.

Figure 9 Sensor correlation heatmap

In our case, we have 17 sensors, from which we select only 7 uncorrelated ones and plot a scatter matrix, a second visualisation technique which allows us to view the data in more detail. Refer to Figure 10.

Figure 10 Scatterplot matrix of uncorrelated sensors
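A sketch of this correlation-based selection on synthetic sensor columns (here `s2` is almost a copy of `s1` and gets dropped; the heatmap itself could be drawn with e.g. `seaborn.heatmap(corr)`):

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
base = rng.normal(size=500)
# Synthetic sensor matrix: s2 is almost a copy of s1, s3 is independent.
sensors = pd.DataFrame({
    "s1": base,
    "s2": base + rng.normal(scale=0.01, size=500),
    "s3": rng.normal(size=500),
})

# Absolute pairwise correlations between the sensors.
corr = sensors.corr().abs()

# Keep only the upper triangle so each pair is considered once,
# then drop one sensor from every pair correlated above 0.9.
upper = corr.where(np.triu(np.ones(corr.shape, dtype=bool), k=1))
to_drop = [c for c in upper.columns if (upper[c] > 0.9).any()]
uncorrelated = sensors.drop(columns=to_drop)
```

The 0.9 threshold is an assumption for illustration; in practice it is tuned per dataset.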

Based on the selected sensors, we now try to characterise different usages for each asset, i.e. we can group usages across the assets based on their sensor values and, in this way, derive a profile for each group. To do this, we first apply hierarchical clustering to group the usages and plot the resulting dendrogram. Hierarchical clustering helps to identify the inner structure of the data and the dendrogram is a binary tree representation of the clustering result. Refer to Figure 11.

Figure 11 Dendrogram

On this graph, below distance 2, we see smaller clusters that merge ever closer to each other. Hence, we decide to split the data into 5 different clusters. You can also use silhouette analysis to select the best number of clusters.
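The clustering, the cut into flat clusters and a silhouette check of the cluster count can be sketched with scipy and scikit-learn; the data below is synthetic (three well-separated groups in a 7-sensor space), so it is an illustration of the technique rather than our actual analysis:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from sklearn.metrics import silhouette_score

rng = np.random.default_rng(0)
# Synthetic stand-in for the usage vectors: three well-separated groups
# in a 7-dimensional (7-sensor) space.
X = np.vstack([
    rng.normal(loc=c, scale=0.3, size=(25, 7)) for c in (0.0, 4.0, 8.0)
])

# Ward linkage builds the hierarchy; scipy's dendrogram(Z) would plot it.
Z = linkage(X, method="ward")

# Silhouette analysis: cut the tree at several candidate cluster counts
# and keep the count with the highest average silhouette score.
scores = {k: silhouette_score(X, fcluster(Z, t=k, criterion="maxclust"))
          for k in range(2, 7)}
best_k = max(scores, key=scores.get)
labels = fcluster(Z, t=best_k, criterion="maxclust")
```

In our real data the cut was chosen by inspecting the dendrogram (5 clusters); the silhouette loop is the automated alternative mentioned above.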

In order to interpret the clustering, we also want to visualise it. However, 7 sensors mean 7 dimensions, which we cannot plot directly, so we apply Principal Component Analysis (PCA) to reduce the number of dimensions to 2. This allows us to visualise the results of the clustering, as shown in Figure 12. Good clustering means that the clusters are more or less well separated, i.e. similar colours lie close to one another and are not mixed too much with other colours, and this is indeed what we see in the figure.

Figure 12 PCA plot
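The dimensionality reduction itself is brief with scikit-learn; a minimal sketch on synthetic 7-dimensional data:

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 7))  # synthetic stand-in for 7-sensor usage vectors

# Project onto the two principal components; in the actual analysis the
# resulting 2-D points are scatter-plotted and coloured by cluster label.
pca = PCA(n_components=2)
X2 = pca.fit_transform(X)
```

`pca.explained_variance_ratio_` indicates how much of the original variance the 2-D view preserves, which is worth checking before trusting the plot.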

After the clustering is complete, we can characterise the usages. This can be done using different strategies. The simplest method is to take the mean of the sensor values for each cluster (i.e. we calculate a centroid) to define a representative usage.
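With the cluster labels attached as a column, this centroid computation is a one-line groupby; a sketch on hypothetical sensor values:

```python
import pandas as pd

# Hypothetical sensor values with cluster labels from the clustering step.
df = pd.DataFrame({
    "s1": [1.0, 1.2, 5.0, 5.2],
    "s2": [0.1, 0.3, 2.0, 2.2],
    "cluster": [1, 1, 2, 2],
})

# One representative usage (centroid) per cluster: the per-cluster mean.
centroids = df.groupby("cluster").mean()
```

Each row of `centroids` is then the profile discussed with the domain experts.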

The last step involves validating the clusters. We can cross-check clustering with the consumption/duration of usages. For instance, we may expect all outliers to fall within one specific cluster, or expect some other more or less obvious patterns, hence rendering our clusters meaningful. In Figure 13 below, we can observe that the 5 clusters, i.e. 5 types of usages, correspond, to an extent but not entirely, to consumption/duration behaviour. We can see purple spots at the bottom and green spots at the top.

 

Figure 13 Relationship between clusters and consumption/duration

Conclusion

At this stage, some interesting outliers were detected in the consumption/duration relationships, which may be explained by the objectives for which the assets were used. We have found clusters that represent typical usages according to the data. Result validation can be improved by integrating additional data, such as maintenance data, into the analysis. Furthermore, the results can be validated and confirmed by the domain experts from Ilias Solutions, the industrial partner we are supporting in their data exploitation.

 

Contact: andriy.zubaliy@sirris.be

Simulation of Wireless Signal Propagation

Wireless communications are used in many industrial maintenance scenarios and are practically the only choice for transmitting data from rotating sensors. One such example in MANTIS is the shaft-mounted torque sensor in the press machine by FAGOR ARRASATE, shown below. The sensor sends the data to the receiving antenna mounted vertically from the machine’s ceiling.

shaft-mounted torque sensor in the press machine by FAGOR ARRASATE

The wireless signal must travel from the transmitting antenna to the receiving one without excessive attenuation. Furthermore, the signal can travel via different paths, such that out-of-phase components attenuate each other. This so-called multipath effect is particularly strong in industrial environments, which contain many large metallic surfaces. Correct placement of the antennas is therefore crucial. A good placement can often be found experimentally through trial and error. However, in cases where repeatedly re-locating a receiver or transmitter is not practical, a numerical simulation of radio wave propagation can be used instead.

As an illustration of the concept, we present here a simulation of antenna placement for a simplified model of part of a press machine, shown below. The orange downward-pointing arrow shows the placement of the sensor and the orientation of its transmitting antenna. The white-and-gray shaded rectangle above the sensor is the receptor plane with the result of the simulation, as explained below. Note that the model is enclosed in a rectangular box on all six sides, but the front and top sides are not shown here so that the inside remains visible.

simulation of antenna placement for a simplified model of a part of press machine

The simulation algorithm is based on the ray-tracing method of images enhanced by double refraction modeling. It is computationally complex but highly parallel, and has thus been adapted to run on GPUs. In our case, the runtime of a single simulation is approximately one minute on a high-end gaming GPU card. The simulator itself was developed by the Jožef Stefan Institute as part of the national research project ART (Advanced Ray-Tracing Techniques in Radio Environment Characterization and Radio Localization), co-funded by the Slovenian Research Agency and XLAB.

In order to use it in MANTIS, XLAB has developed a Blender plug-in that exports the model into the proprietary simulator format, and a similar plug-in to import the simulation results back. We then ran a series of experiments simulating the rotation of the shaft, thereby changing both the position and the orientation of the transmitter antenna. The signal wavelength was 0.1225 m, corresponding to the Bluetooth/WiFi frequency range. The video below shows the result. The color scale runs from 0 dB loss (black) to 100 dB (white). However, values over 90 dB are replaced by red to highlight the areas that will most probably not have acceptable reception with common BLE or WiFi antenna setups.

 

Clearly visible are the vertical belts resulting from the obstruction by, and reflections from, the shafts, as well as the diagonal patterns of reflections from the slanted parts of the model. Most importantly, within the belts of good reception we can see strong multipath interference patterns. Some of the individual, isolated red and white points are artifacts of the simulation where no ray happened to reach that exact area. These artifacts could be reduced by increasing the number of cast rays, which would, however, also slow down the simulation considerably. Finally, it has to be noted that this experiment was only intended as an illustration of the concept, and no validation or comparison with actual signal measurements in the field has been performed yet.

Human-machine interaction in MANTIS project

Proactive, collaborative and context-aware HMI

One of the objectives of the MANTIS project is to design and develop a human-machine interface (HMI) that deals with the intelligent optimisation of production processes through the monitoring and management of their components. The MANTIS HMI should allow intelligent, context-aware human-machine interaction by providing the right information, in the right modality and in the best way for users, when needed. To achieve this goal, the user interface should be highly personalised and adapted to each specific user or user role. Since MANTIS comprises eleven distinct use cases, the design of such an HMI presents a great challenge. Any unification of the HMI design may impose constraints that could result in an HMI with poor usability.

Our approach, therefore, focuses on the requirements that are common to most of the use cases and are specific to proactive and collaborative maintenance. A generic MANTIS HMI was specified to an extent that does not introduce any constraints for the use cases, but at the same time describes the most important features of the MANTIS HMI that should be considered when designing the HMI of individual use cases.

The MANTIS HMI specifications are the result of a refinement of the usage scenarios provided by the industrial partners, taking the general requirements of the MANTIS platform into account. The functional specifications describe the HMI functionalities that are present in most use cases, abstracted from the specific situation of every single use case.

We describe a generic static model that can be used together with the requirement specifications of each individual use case to formalize the structure of the target HMI implementation. The model has been conceived with two ideas in mind: (i) to provide means that help to identify the HMI content elements of a given use case and their relationships, and (ii) to unify (as much as possible) the HMI structure of different use cases, which is useful for comparing implementations and exchanging good practices. When setting up the model structure, we follow the concepts of descriptive models applied in task analysis and add the specifics of MANTIS, denoted as MANTIS high-level tasks. For each of these high-level tasks, we provide a list of functionalities supporting them.

MANTIS human-machine interaction comprises five main aspects:

  • User interfaces;
  • Users;
  • MANTIS platform;
  • Production assets; and
  • Environment.

Through their user interfaces, several different users within a use case communicate with the MANTIS platform, which in turn communicates with the production assets. Interaction can take place in both directions. Users can not only access information retrieved from production assets and stored on the platform, but also provide input to the MANTIS system. They can initiate an operation which is then carried out by the platform, such as rescheduling a maintenance task, or respond to a system-triggered operation, such as an alarm. On the other hand, users can also communicate among themselves through the MANTIS platform. In addition to straightforward communication via textual or video chat functions, the users can also communicate via established workflows.

Last but not least, the environment is also a main aspect of the interaction. Although it can be treated neither as a direct link between the user and the system nor as part of the communication among the users, the environment can influence the human-machine interaction through context-aware functionalities.

From the users’ point of view, the human-machine interaction within the MANTIS system supports five main high-level user tasks associated with proactive and collaborative maintenance:

  • Monitoring production assets;
  • Data analysis;
  • Maintenance tasks scheduling;
  • Reporting; and
  • Communication.

While monitoring production assets, data analysis and maintenance task scheduling are vital for proactive maintenance, reporting and communication enable collaboration among different user roles. Each of these tasks is carried out by a number of MANTIS-specific functionalities that can be classified as user input, system output, or user- or system-triggered operations. These functionalities should cover all the main aspects of MANTIS human-machine interaction and should also be general enough to be applicable to any MANTIS use case as well as potential future ones.

MANTIS HMI demonstrator

At the MANTIS meeting in Helsinki, the first version of the web-based HMI demonstrator, developed by XLAB with the Angular Dashboard Framework and other JavaScript libraries, was presented to the MANTIS consortium. It is currently connected to the MIMOSA database and demonstrates live data from the FORTUM use case. The HMI is designed as a customizable, user-dependent, responsive multi-widget dashboard, comprising basic read-only widgets such as graphs and tables. The features of the demonstrator follow the HMI functional specifications, and it is designed in such a way that it can be applied to any use case with a MIMOSA database.

The first version of the web-based HMI demonstrator

In the near future, many other features will be implemented, including more widget types, dashboard navigation, search function and sharing of data views. Some context-aware features, such as hidden widgets that appear when needed and suggestions of further user actions based on the usage history, will be implemented as well. In addition, general visual design recommendations such as colours, fonts and widgets positioning, described earlier in the project, will be applied.

Helsinki Consortium Meeting in May 2017, and the Conventional Energy Production Use-Case

The second (sixth overall) full consortium meeting of 2017 was held between the 8th and 10th of May. This time it was hosted by VTT at their new Center for Nuclear Safety located in Espoo, Finland. The three-day event gathered 65 participants from all of the participating countries. The program was more technologically oriented and contained a long open space session, where partners could present their work within the project. Despite the tight program, there was still some time to enjoy the wonderful Finnish spring weather.

The wonderful Finnish spring

The Finnish use case was prominently on display at the Open Space session at the MANTIS consortium meeting. The floor in the first open space room was dedicated to the Finnish use case, with Nome, Wapice, Fortum, VTT and Lapland University of Applied Sciences (UAS) each presenting their work done in it. Wapice and Fortum presented their HMIs (IoT Ticket and TOPi, respectively). Nome and VTT presented their measurement systems (NMAS and the affordable sensor research, respectively), and finally Lapland UAS presented the database and REST interface that allows each partner to share and access data beyond organizational boundaries. The second room had most of the use cases represented. Of note was XLAB's common MANTIS user interface demo, which can be connected to the Finnish use case platform.

Open space session at Helsinki Consortium meeting

The Finnish use case is centered on a flue gas recirculation blower located in Fortum's Järvenpää power plant. The blower is classified as a critical component in the energy production process and is monitored closely. In this use case, Wapice, Nome and VTT have all provided their own sensors or virtual sensors to monitor the performance and condition of the blower. In addition, Lapland UAS operates a few Wzzard sensors, made by B+B Electronics/Advantech, which provide some additional bulk measurement data; however, these are not related to the Järvenpää case. The measurement data is stored, using the REST interface developed by Lapland UAS, in the MANTIS database, which is based on the MIMOSA data model.

Flue gas recirculation blower in Fortum’s TOPi Proview browser

The REST interface and MIMOSA database mapper provide a simple interface between different applications and systems that is both easy to use and easy to integrate. It provides basic CRUD functionalities and contains a mapper that maps measurement-system-specific data formats and structures into MIMOSA-compliant data structures, ensuring interoperability and compatibility with the MIMOSA data model. It is widely used in the Finnish use case, and research partners from both Slovenia and Hungary have shown interest in utilizing MIMOSA in their use cases.
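To give a flavour of what a CRUD-style client of such a REST interface looks like, here is a minimal sketch; the base URL, endpoint path and JSON payload are entirely hypothetical (the actual MANTIS/MIMOSA paths and schema are project-specific and not described in this article), and the request is only constructed, not sent:

```python
import json
import urllib.request

# Hypothetical endpoint; the real interface maps such payloads into
# MIMOSA-compliant structures on the server side.
BASE = "http://example.org/mimosa/v1"

def post_measurement(asset_id: str, value: float) -> urllib.request.Request:
    """Build a POST request for one measurement (Create in CRUD)."""
    body = json.dumps({"asset": asset_id, "value": value}).encode()
    return urllib.request.Request(
        f"{BASE}/measurements", data=body,
        headers={"Content-Type": "application/json"}, method="POST",
    )

req = post_measurement("blower-1", 42.0)
# urllib.request.urlopen(req) would send it to a live server.
```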

A diagram of the Finnish use case

SmartG presentation in Hannover Messe

Goizper and IK4-TEKNIKER will be present at the Hannover Messe from 24 to 28 April 2017, presenting Smart G, a data acquisition module for clutch-brake monitoring.

Clutch-brake systems produced by Goizper are key components in cutting, forming, folding and press machines.

The aim of this presentation is to show how incorporation of the Smart G module can convert a clutch-brake system into a monitorable smart component, which includes self-diagnostics capabilities that can provide information about the current state of the component and predict failures before they occur.

Communication modules incorporated in the Smart G component provide the capabilities to:

  • Remotely monitor the component
  • Send the data to a cloud platform where all historic data are stored.

Having the data of Goizper's clutch-brake fleet on a cloud platform will make it possible to use more advanced techniques and algorithms to predict failures and/or the remaining useful life of key components of the system.

The benefits are two-fold: Goizper will drastically improve its knowledge about its equipment, improving the reliability of its products and the maintenance services provided to customers, while customers will benefit from reduced machine downtime and a more cost-effective maintenance strategy.

Goizper and Tekniker’s work on failure prediction and diagnosis, as well as cloud platform development, have received funding from the European Union under the MANTIS project.

 

Smart G concept block diagram
Figure 1. Smart G concept block diagram.

 

SmartG-status
Figure 2. General status of the machine.

 

SmartG-breaking

SmartG-clutching

Figures 3 and 4. Braking and clutching process performance

SmartG-alarms
Figure 5. Active alarms and alarms history

 

SmartG-productA
Figure 6. Product pictures
SmartG-productB

Classifying tool images to enhance predictive maintenance

Introduction

Philips Consumer Lifestyle (PCL) is an advanced manufacturing site located in Drachten, the Netherlands. Our organization falls within the Personal Health business cluster of Philips, and is primarily concerned with the manufacturing of personal electric shavers.

Electric shavers comprise two principal component 'blocks': a body and a shaving unit. Each shaving unit contains three metallic shaving 'heads', which in turn are composed of a shaving blade (the cutting element) and a shaver cap (the guard). The focus of the MANTIS project at PCL falls on the production of these shaver caps.

Philips Shaver and components

An electro-chemical process is used in the manufacturing of shaver caps, where an electric current is passed over the raw input material, which is conductive, in order to cut this material into the desired shape. Production of the shaver caps at PCL is fully automated.

Production Line

Precision tooling is required throughout the various stages of shaver cap manufacturing. At present, these tools are built on-site and are required to be kept in stock so that replacements are available in the event of tooling malfunctions. Having functional tools available around the clock is essential to meet our goal of 100% 'up-time' for our assembly lines. However, this is an expensive way to address the problem, both in terms of the additional equipment required and the extensive down-time that results from manual tooling replacements. Therefore, the timely maintenance of these tools presents a challenge.

Tool maintenance

Currently, the maintenance strategy on the production line for shaver caps is a mixture of reactive and preventive maintenance. In line with the MANTIS goal, our aim is to transform this towards a predictive or even a prescriptive maintenance strategy. However, this comes with a need for data: in order to perform maintenance on the tooling at exactly the right moment, information about the tooling is necessary to make useful decisions.

The data directly related to the current state of the tooling (e.g. degree of wear, damage, etc.) is hard to retrieve in some cases, due to process-specific reasons. In our use case, the tooling is delicate and very precise (micron range, difficult geometries), which makes frequent measurements of the tooling difficult and expensive in a mass production environment. Currently, there is only indirect data available about the use of the tools in the production machines, but not about the actual state of the tool itself. These data can be used to estimate, for example, the remaining useful life (RUL) of a tool, but in order to improve and verify the RUL prediction models, more direct data is necessary.

Tool wear sensor

To solve this matter, a collaboration between the University of Groningen and Philips Consumer Lifestyle has been started in the context of the MANTIS consortium, with the goal of developing a tool wear sensor based on an optical imaging system. A robust setup with a high-resolution sensor will take detailed images of the individual tools.

Tool images and labelling system

The raw images are preprocessed: the parts of interest of the tool are cut out of the image and rotated to form the input for a machine learning algorithm. The next step is to normalize the pictures so that they are more or less comparable.
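A minimal sketch of this crop-rotate-normalize pipeline in NumPy; the crop box and the synthetic "image" below are hypothetical stand-ins for the actual tool images:

```python
import numpy as np

def preprocess(raw: np.ndarray, box: tuple) -> np.ndarray:
    """Crop the region of interest, rotate it upright and scale to [0, 1]."""
    top, left, h, w = box
    crop = raw[top:top + h, left:left + w]
    crop = np.rot90(crop)  # align the tool orientation (assumed 90-degree step)
    crop = crop.astype(np.float64)
    # Min-max normalisation so images become comparable in intensity.
    return (crop - crop.min()) / (crop.max() - crop.min() + 1e-9)

# Hypothetical 8-bit grayscale image of a tool.
image = np.arange(100, dtype=np.uint8).reshape(10, 10)
patch = preprocess(image, box=(2, 2, 6, 6))
```

Real tool images would also need the rotation angle estimated from the tool geometry rather than a fixed 90-degree turn.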

Since we have no baseline, we asked our maintenance engineers (the domain experts) to label all these individual images. Together we chose three specific labels: wear, damage and contamination. The input of the maintenance engineers is used to train the algorithm, but also to assess how consistently the individual pictures are labelled across multiple engineers.
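As a sketch of how such labels feed a supervised model, here is a simple classifier on synthetic features standing in for the flattened, preprocessed images; the three classes mirror the wear/damage/contamination labels, and logistic regression is just one plausible "simple machine learning" choice, not necessarily the one used in the project:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
# Synthetic flattened images (64 features) with engineer-provided labels:
# 0 = wear, 1 = damage, 2 = contamination. Class means are shifted so the
# toy problem is separable.
X = rng.normal(size=(90, 64)) + np.repeat([0.0, 2.0, 4.0], 30)[:, None]
y = np.repeat([0, 1, 2], 30)

clf = LogisticRegression(max_iter=1000).fit(X, y)
accuracy = clf.score(X, y)  # training accuracy on the toy data
```

On real images, accuracy would of course be measured on a held-out set, and inter-engineer label agreement bounds what any model can achieve.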

So far, over 1,500 pictures have been labelled in about a month. Initial results seem to indicate that simple machine learning can outperform human labeling with regard to tooling deviations.

If the results are good, the trained algorithm will ultimately be used with an automatic calculation engine that runs new images through the algorithm. This means that we also have to change the way of working, and provide the maintenance engineers with easy-to-use tools to take these new images as part of their regular maintenance steps. The outcome of the analysis forms an input for determining the remaining useful life of the tool, in combination with both process and quality data.

Technical workshops of the MANTIS project in Madrid

Nearly 70 participants attended the three-day MANTIS meeting organized by ACCIONA Construcción S.A. at their premises in Madrid, Spain, from the 18th to the 20th of January 2017.

The agenda of the meeting was designed with fewer informative sessions and more interactive ones, in order to foster fruitful discussions and decision-making on further steps.

Technical workshops of the MANTIS project in Madrid

The meeting started with a session chaired by the Project Coordinator to give everybody a precise idea of the status of the project. Then most of the use cases presented their latest developments, focusing on data availability and the analytics to be used. Following this session, the Open Space took place, where several posters were shown and discussed in small groups. In the afternoon, the first parallel sessions started, covering WP3 and WP5.

The second day was very intense. At the beginning, WP3 and WP5 finalized the discussions started the day before. Then it was the turn of WP2 and WP4. Regarding the latter, it is worth noting that there were several sessions addressing very specific technical aspects. In the evening, a joint dinner was organized in a very famous place in Madrid, where an impressive flamenco show was performed.

Technical workshops of the MANTIS project in Madrid

On the last day of the meeting, before the conclusion and wrap-up session, WP2 members continued their discussions while the WP8 session ran in parallel. Finally, the EB meeting took place.

PVTECH publishes 3E’s article on data mining for automatic fault detection and diagnosis from photovoltaic monitoring data

The continuous and systematic analysis of performance data from the monitoring of operational PV power plants is vital to improving the management, and thus the profitability, of those plants over their lifetime. The article draws on an extensive programme undertaken by 3E to assess the performance of a portfolio of European PV power plants it monitors. It illustrates 3E's approach to automatic fault detection and explores the various data mining methodologies used to gain an accurate understanding of the performance of large-scale PV systems, and how that intelligence can be put to best use for the optimal management of solar assets.

3E’s work on automatic fault detection and diagnosis has received funding from the European Union under the MANTIS project.

Please click here to read the full article.

Modern Internet of Things technologies in industrial condition monitoring

Introduction

Wapice is a Finnish company specialized in providing software and hardware solutions to industrial companies for a wide variety of purposes. We have developed remote management and condition monitoring solutions since our beginnings, and our knowledge of this business domain has evolved into our own Internet of Things platform, IoT-Ticket. Today IoT-Ticket is a complete industrial IoT suite that includes everything required, from acquiring the data to visualizing and analyzing lifetime-critical asset information.

Why condition monitoring

In predictive maintenance, the goal is to prevent unexpected equipment failures by scheduling maintenance actions optimally. When successful, it is possible to reduce unplanned stops in equipment operation and to save money through a higher utilization rate, extended equipment lifetime and lower personnel and spare part costs. Succeeding in this task requires a deep understanding of asset behaviour, statistical information on equipment usage and knowledge of wear models, combined with measurements that reveal the equipment's current state of health. Earlier, these measurements were carried out periodically by trained experts with special equipment, but modern IoT technologies now make it possible to gather real-time information about field devices continuously (i.e. condition monitoring). While this increases the availability of data, it creates another challenge: how to process massive amounts of data so that the right information is found at the right time. In the condition monitoring process, the gathered data should ideally be processed so that the amount of data transferred uplink decreases while the understanding of the data increases.

A correct architecture leads to a process where only relevant information traverses the uplink in the condition monitoring chain
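The idea that less data can carry more understanding is easy to illustrate: instead of streaming raw samples uplink, the edge device can reduce each measurement window to a handful of condition indicators. The following sketch (plain Python with hypothetical names and thresholds, not Wapice product code) condenses a vibration window of a thousand samples into four features:

```python
import math

def summarize_window(samples):
    """Reduce a raw vibration window to a few health indicators.

    Instead of uploading every raw sample, the edge device sends only
    aggregate features, so less data travels uplink while the
    informational value per byte increases.
    """
    n = len(samples)
    mean = sum(samples) / n
    rms = math.sqrt(sum(s * s for s in samples) / n)   # overall energy
    peak = max(abs(s) for s in samples)                # largest excursion
    return {"mean": mean, "rms": rms, "peak": peak, "crest": peak / rms}

# 1000 raw samples collapse into a 4-number summary
window = [math.sin(0.1 * i) for i in range(1000)]
features = summarize_window(window)
```

Indicators such as RMS and crest factor are classic bearing-condition features; which features matter depends entirely on the physical phenomenon being monitored.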

This article describes how modern IoT technologies help in condition monitoring processes and how data aggregation solutions make it possible to share condition monitoring information between different vendors. This further improves operational efficiency by enabling real-time condition monitoring not only at the asset level but also at the plant or fleet level, where service operators must understand the behaviour and remaining lifetime of assets coming from different manufacturers.

WRM247+ data collector for edge analysis

The first link in the condition monitoring chain is the hardware and sensors. In order to measure the physical phenomena behind the wear of an asset, a set of sensors is required to sample and capture data from the monitored devices. This data must be buffered locally and pre-processed, and finally only the crucial information must be transferred to the server, where the physical phenomena can be identified from the signals. Depending on the application area, different types and models of sensors are required to capture just the relevant information; depending on the physical phenomena, different analysis methods are needed as well. For these reasons, measurement systems have so far been custom tailored to the target. This approach works, of course, but designing custom-tailored measurement systems is time consuming and expensive. Our approach to overcoming these problems has been to implement IoT building blocks that adapt to a wide variety of purposes and can be taken into use easily and flexibly. Flexibility and user-friendliness are among the cornerstones of our system.

On the hardware side, our IoT platform offers several approaches. The WRM247+ measurement and communication device is our hardware reference design; it allows connecting a wide variety of industrial sensors using either wired or wireless communication methods, and also provides local buffering and pre-processing of data as well as communication to the server. Examples of supported standard protocols are CAN, CANopen, Modbus, Modbus TCP, 1-Wire and digital/analog IOs. This device is an excellent starting point for the most common industrial measurement purposes.

WRM247+ Measurement and communication gateway

In the MANTIS project, Wapice has been investigating the interoperability of wireless and wired sensors. In Use Case 3.3, Conventional Energy Production, we will demonstrate the fusion of wireless Bluetooth Low Energy (BLE) technology and wired high-accuracy vibration measurements. To achieve this, we have built support for connecting IEPE-standard vibration sensors to the WRM247+ device. The device supports any industrial IEPE-standard sensor, which makes it possible to select a suitable sensor for the application area. Additionally, we have built support for connecting a network of BLE sensors to the device. In this use case, the purpose of the arrangement is to gather temperature information around the flue-gas circulation blower using the wireless BLE sensors and to perform vibration measurements on the rolling bearing. The temperature measurements reveal possible problems, e.g. in the lubrication of the bearing, and may allow actions to be taken before a catastrophic failure happens.

If the WRM247+ device is not suitable for the purpose, it is possible to integrate custom devices into IoT-Ticket easily using the available REST API. For this purpose we provide full documentation and free developer libraries for several programming languages, including C/C++, Python, Qt, Java and C#. Other integration methods include, for example, OPC or OPC UA and the Libelium sensor platform, which supports e.g. wireless LoRa sensors. In addition, Wapice has long experience in designing machine-to-machine (M2M) solutions, including PCB layout, embedded software design and protocol implementation, so we can also offer tailored Internet of Things hardware or embedded software that fully suits your needs.
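As a rough illustration of such a REST integration, the sketch below pushes a single measurement as JSON with HTTP basic authentication. The endpoint path and payload fields here are assumptions made up for this example; the actual schema is defined by the IoT-Ticket REST API documentation:

```python
# Sketch of integrating a custom device with an IoT platform over its
# REST API.  The "/process/write" path and the payload layout are
# illustrative assumptions only, not the documented IoT-Ticket schema.
import base64
import json
import urllib.request

def build_payload(path, name, value, ts_ms):
    """Wrap one measurement as the kind of JSON list a write API expects."""
    return [{"path": path, "name": name, "v": value, "ts": ts_ms}]

def push_datapoint(base_url, user, password, path, name, value, ts_ms):
    """POST a single data point using HTTP basic authentication."""
    body = json.dumps(build_payload(path, name, value, ts_ms)).encode()
    req = urllib.request.Request(
        base_url + "/process/write",  # hypothetical endpoint
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    token = base64.b64encode(f"{user}:{password}".encode()).decode()
    req.add_header("Authorization", "Basic " + token)
    with urllib.request.urlopen(req) as resp:
        return resp.status
```

The same pattern works from any of the listed languages; the developer libraries simply wrap this kind of HTTP traffic behind a friendlier interface.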

 

Interface options in the IoT-Ticket architecture

 

IoT-Ticket portal for back-end tools

On the back-end side, IoT-Ticket provides all the necessary tools for visualizing and analyzing data. Our tools are web based and require no installation: simply log in, create and explore!

The Dashboard allows users to interact securely with remote devices, check their status, view reports or get updates on current operational performance. It can be utilized in various scenarios, e.g. vehicle tracking or real-time plant and machinery monitoring and control. As many Dashboard Pages are available, the user can switch between different contexts and drill into information starting from the enterprise level down to sites, assets and data nodes. The Dashboard also includes two powerful tools for content creation: the Interface Designer and the Dataflow Editor.

Using the Interface Designer, the user can draw new elements or add images, gauges, charts, tables, Sankey diagrams, buttons and many other elements onto the Dashboard. These elements can then be easily connected to data by dragging and dropping Data Tags onto them.

The Dataflow Editor is an IEC 61131-3 inspired, web-based, graphical block programming editor that integrates seamlessly with the Interface Designer. A user designs a dataflow by connecting function blocks to implement complex logic operations, which can then be used to execute control actions or be routed to user interface elements for monitoring purposes.

In Use Case 3.3, Conventional Energy Production, Wapice, together with Finnish partners, demonstrates Cloud-to-Cloud integration in the MANTIS platform using the IoT-Ticket platform tools. In this use case, LapinAMK and VTT have jointly set up a Microsoft Azure based MIMOSA data aggregation database. The plan is to share condition monitoring KPI information through the MIMOSA database, which exposes its data through a REST API. Devices may push data either directly to MIMOSA or through local clouds.

Data sharing and aggregation in the MANTIS platform using the MIMOSA aggregation database

 

IoT-Ticket allows communication with REST sources using the Interface Designer's graphical flow programming tools. Getting data from a REST source is done by simply creating a background server flow that contains a trigger and a REST block. The REST block is configured with a username and password for authenticating to the REST source, the source URL and the REST method containing the XML/JSON payload. From the REST response, the data value is parsed and output to data charts or forwarded for further processing. Additionally, virtual data tags allow forwarding the data into the IoT-Ticket system. By configuring the flow to run in server mode, it runs silently in the background all the time; the operation interval is set with a timer block, which fires the REST block at fixed intervals. The example video below shows how Cloud-to-Cloud communication between MIMOSA and IoT-Ticket is established in Use Case 3.3: sinusoidal test data originates from the LapinAMK enterprise, and a tachometer RPM reading comes from the system under test via a WRM247+ device.
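Reduced to plain code, the flow described above amounts to a timer-driven fetch-parse-forward loop. The sketch below mimics it in Python with a simulated REST source; the JSON layout is a made-up stand-in, not the actual MIMOSA schema:

```python
# A timer tick fires, the REST source is read, the value is parsed from
# the JSON reply and forwarded onwards (to a chart or a virtual data
# tag).  The response layout here is a hypothetical stand-in.
import json

def parse_value(response_body):
    """Extract the measurement value from a JSON reply."""
    return json.loads(response_body)["measurement"]["value"]

def poll_once(fetch, forward):
    """One timer tick: fetch the raw reply, parse it, forward the value."""
    forward(parse_value(fetch()))

# Simulated REST source and sink, standing in for the real blocks
fake_reply = '{"measurement": {"name": "rpm", "value": 1480.0}}'
received = []
poll_once(lambda: fake_reply, received.append)
```

In the Dataflow Editor the same three roles are played by the timer block, the REST block and the chart or virtual data tag the output is wired to.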

The reporting and analytics tools round out the platform's features. The report editor integrates seamlessly into the Dashboard and offers the user the possibility to create or modify content: the user can draw new elements or add images, gauges, charts, tables, Sankey diagrams, buttons and many other elements onto the report, and then connect them to data by dragging and dropping Data Tags onto them. The analytics tool is also integrated into the Dashboard and supports you in understanding your data better.

Benefits of IoT in condition monitoring

Typically, condition monitoring data has been scattered across separate information systems, and it has been very hard or even impossible to create a single view of all the relevant data or to correlate information spread across different databases. MIMOSA is an information exchange standard that allows classifying and sharing condition monitoring data between enterprise systems. It addresses the data sharing problem by allowing the aggregation of crucial information into a single location in a uniform and understandable format. When interfaced with modern REST-based information sharing technologies that use, for example, JSON or XML messaging, it is surprisingly easy to collect and share crucial information through a single aggregation database. Accompanied by modern web-based industrial IoT tools, it is then easy to visualize data, create reports or perform further analysis using only the crucial information available.

In this blog post I have highlighted some examples of how industrial IoT building blocks help you gather relevant condition monitoring information, create integrations between data sources and aggregate business-relevant information into a single location. Focusing on the crucial information allows you to better understand your assets and predict their maintenance needs. This is essential when optimizing the value of your business!

See more examples on our web pages (www.iot-ticket.com) or on our YouTube channel: https://www.youtube.com/channel/UCJt9c3edgH7cQdSYYIH0YbQ/videos

 

 

MANTIS for Compressor maintenance

With the SMARTLINK monitoring program, Atlas Copco makes use of connectivity data and data intelligence to help customers keep up their production uptime and, where possible, improve energy efficiency.

With approximately 100,000 machines connected with SMARTLINK, Atlas Copco makes compressors in the field communicate directly with the back office and their service technicians.

Atlas Copco’s SMARTLINK technology allows for remote monitoring of compressors in the field.

Customers become more proactive, planning is more efficient and reliability of the compressed air installations is better than ever before.

Customers of SMARTLINK get a monthly overview of machine information, including running hours and the time left before service, thus allowing them to order a service visit at the right time, maintaining maximum uptime and energy efficiency.

With SMARTLINK they can closely follow up on machine warnings via email or SMS. With this information they can take the necessary actions to prevent a breakdown.

With the MANTIS project, Atlas Copco will take proactive maintenance to the next level, by:

  • predicting the remaining useful life of consumables and components that are subject to wear
  • detecting upcoming problems or inefficiencies before they worsen
  • remotely diagnosing the root cause of an unplanned shutdown
During the MANTIS project, Atlas Copco will take proactive maintenance to the next level by predicting component lifetime, detecting anomalies and performing remote diagnosis of the compressed air installation.
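As a toy illustration of the first point, remaining useful life can be estimated by extrapolating a wear indicator toward its end-of-life threshold. The sketch below uses a simple least-squares line on made-up readings; Atlas Copco's actual approach combines self-learning techniques with physics-based compressor models and is far more sophisticated:

```python
# Toy remaining-useful-life estimate: fit a line to a monotonically
# growing wear indicator and extrapolate to the end-of-life threshold.
def remaining_useful_life(hours, wear, threshold):
    """Return the estimated operating hours left before the wear
    indicator crosses `threshold`, via a least-squares line fit."""
    n = len(hours)
    mh = sum(hours) / n
    mw = sum(wear) / n
    slope = (sum((h - mh) * (w - mw) for h, w in zip(hours, wear))
             / sum((h - mh) ** 2 for h in hours))
    intercept = mw - slope * mh
    return (threshold - intercept) / slope - hours[-1]

# Hypothetical readings: wear grows roughly 0.001 units per running hour
hours = [0, 100, 200, 300]
wear = [0.10, 0.21, 0.30, 0.41]
rul = remaining_useful_life(hours, wear, threshold=1.0)  # ~580 h left
```

In practice the indicator (bearing temperature trend, element pressure drop, oil condition, etc.) and the degradation model are component-specific, which is exactly why self-learning and physics-based models are combined.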

Moreover, in order to reduce communication costs, smart sensing technology is being investigated, i.e. how local preprocessing of information can significantly reduce the amount of data to be transmitted.

A major challenge for Atlas Copco is the huge variety of compressor types and operating conditions. To process this enormous amount of information, self-learning techniques are combined with physics-based compressor models. Eventually, these will enable the discovery of new patterns in data, collected on a worldwide scale.

The ultimate goal is to translate these data into actionable information for the global service network.

Service interventions will be planned even better and will be shorter and more efficient. Problems will be fixed in one visit, as technicians will know in advance what to do and what parts to bring.

The results of the project will allow better service planning, shorter visits and first-time fixes, thus reducing downtime for end customers and ensuring sustainable productivity.

For the customer, this means no unnecessary maintenance and less planned or unplanned downtime, and therefore maximum productivity.