Blog

MANTIS: Conventional Energy Production Use-Case

The Finnish use-case under the MANTIS project concentrates on proactive maintenance solutions in the field of conventional energy production. The industry is moving towards smaller distributed plants with less on-site staff, and the ability to deploy conventional condition-based maintenance (CBM) strategies has therefore declined. However, availability is still a major factor in power generation efficiency and plant feasibility. Therefore, new kinds of energy production asset maintenance solutions, applicable also to less critical components, are required.

Five industrial and academic partners, namely Fortum, Lapland University of Applied Sciences (LUAS), Nome, VTT and Wapice, form the Finnish consortium in the MANTIS project. The Finnish use-case of conventional energy production is centered on a flue gas blower in Fortum’s Järvenpää power plant. Power plants have a large array of rotating machinery, whose reliability greatly affects the overall reliability of the plant. As such, the blower offers a valid testing environment for the collaborative maintenance solutions developed by the Finnish partners. The blower has been instrumented with vibration sensors, virtual sensors and local data collectors provided by Nome, Wapice and VTT. The measurement data is stored in the MANTIS database, based on the MIMOSA data model, via a REST interface developed by LUAS. The collected data can be distributed to individual systems across organizational boundaries for analysis purposes. The partners of the conventional energy production use-case have integrated their own analytic tools, such as Fortum’s TOPi, Nome’s NMAS and Wapice’s IoT-Ticket, with the MANTIS database, as illustrated in figure 1, and have successfully tested the system architecture in practice.

Pilot structure of the conventional energy production use-case

The MANTIS project has offered a great opportunity for the conventional energy production use-case partners to develop their own HMIs that can be integrated into different fields of proactive maintenance. The development work continues in the third and last phase of the MANTIS project, as some advanced visualization approaches, including virtual reality and augmented reality applications, are piloted and integrated into the HMIs. The cloud architecture piloted at Fortum’s Järvenpää power plant will also be tested on a larger scale across an entire power plant. The data collection will be extended to cover a wider range of equipment and process variables to enable plant-wide monitoring of assets and proactive maintenance strategies. In addition, the partners are developing their analytic tools further to provide solutions capable of the diagnostics and prognostics required in advanced maintenance.

 

MANTIS Consortium Meeting at SIRRIS (Ghent, Belgium)

As we do every four months, we held a new consortium meeting in January. This time we met in the beautiful city of Ghent and were fantastically hosted by our partner SIRRIS.

Mantis Meeting participants

We are approaching the end of the project and therefore decided not to hold any more parallel sessions, so that everyone would be fully aware of the activities of all Work Packages. Also, the Open Sessions, where we always showcase our latest developments in an interactive way, featured no posters this time but plenty of live demos.

 

MANTIS-OpenSpace
Open Session – MANTIS

Of course, we continue working hard till the end of April!

MANTIS-Meeting

Next, and last, meeting in Budapest, hosted by BMU & AITIA!

 

Clustering machines based on event logs within the MANTIS project

Introduction

Liebherr participates in the MANTIS project as an industrial partner through its hydraulic excavators division. As expected, Liebherr’s main expertise lies in developing and optimizing excavators while taking different information sources into consideration. However, after an excavator is delivered to the customer, it automatically generates event and message data, which is currently used mainly for fault diagnostics but not extensively for further investigation.

Among other things, the event data logger essentially records:

  • timestamp, when an event occurs
  • type of event, e.g. info, warning or error
  • unique message identifier of this event class

In combination with anonymized data on the service partner and the customer, the following questions are relevant:

  • Is there a relation between the message patterns and the corresponding anonymized service partner?
  • Is there a relation between the message patterns and the anonymized customer?

Analysis approach for clustering

The related analysis was performed by the University of Groningen (RUG), a research partner within the MANTIS project, by considering each excavator as a stochastic message generator. During preprocessing, the different messages were first counted per excavator and then normalized by the total number of occurrences per unique message identifier.

Based on the computed message probabilities per machine, k-means clustering was performed. To overcome the influence of initialization, the clustering was repeated 100 times with random initializations. The relationship between the cluster assignment of each excavator and the corresponding service partner or customer was subsequently examined with the chi-square test for each ‘k’. The average significance over the 100 model estimations for each ‘k’ then served as the quality function.
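A minimal sketch of this procedure is shown below, assuming a pandas DataFrame of normalized message probabilities (one row per excavator, one column per message identifier) and a series of anonymized partner or customer labels; the data layout and parameter choices are assumptions for illustration, not the actual RUG implementation.

```python
# Sketch: repeated k-means runs on the message probabilities, with the average
# chi-square p-value against the partner (or customer) labels as quality function.
import numpy as np
import pandas as pd
from scipy.stats import chi2_contingency
from sklearn.cluster import KMeans

def avg_significance(message_probs: pd.DataFrame, labels: pd.Series,
                     k: int, n_runs: int = 100) -> float:
    """Average p-value over n_runs randomly initialized k-means clusterings."""
    p_values = []
    for seed in range(n_runs):
        km = KMeans(n_clusters=k, n_init=1, random_state=seed)
        clusters = km.fit_predict(message_probs.values)
        table = pd.crosstab(clusters, labels)   # cluster assignment vs. label
        _, p, _, _ = chi2_contingency(table)
        p_values.append(p)
    return float(np.mean(p_values))

# Quality function over a range of k, as plotted in figures 1 and 2:
# scores = {k: avg_significance(probs, partners, k) for k in range(2, 15)}
```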

Results of cluster analysis

As can be seen in figure 1, there is no tendency towards a relationship between the service partner and the messages per excavator. The average significance level is clearly higher than 0.05, and all of the individual levels are of nearly the same magnitude.

Average p-significance levels as a function of k (number of groups), for the interaction clusters versus service partner

In contrast to figure 1, figure 2 shows a clear minimum at k=7, indicating that for this number of groups the distribution of machines over customers is unlikely to be random. Although the p_signif value of 0.0588 is slightly above the significance level of 0.05, the magnitude at k=7 is clearly lower than at other k-values.

 

Average p-significance levels as a function of k (number of groups) for the interaction clusters versus customers.

In order to explain the minimum at k = 7, Liebherr decoded the anonymized customers and tried to find a description of the clusters manually. This cumbersome work did not yield the expected result, namely short cluster descriptions, but instead revealed mismatches in the customer data.

In summary, the analysis showed that, with the skilful use of analysis algorithms, seemingly unmanageable data can disclose insights. However, one of the basic requirements for later use of the results is proper data preparation.

General methodology of asset usage profiling for proactive maintenance prediction

When analysing sensor data, you are typically confronted with different challenges relating to data quality. Here, we show you how these challenges can be dealt with and how we derive some initial insights from cleaned data via exploration techniques such as clustering.

Nowadays, especially with the advent of the Internet of Things (IoT), large quantities of sensor data are collected. Small sensors can be easily installed, on multipurpose industrial vehicles for instance, in order to measure a vast range of parameters. The collected data can serve many purposes, e.g. to predict system maintenance. However, when analysing it, you are typically confronted with different challenges relating to data quality, e.g. unrealistic or missing values, outliers, correlations and other typical and atypical obstacles. The aim of this article is to show how these challenges can be dealt with and how we derive some initial insights from cleaned data via exploration techniques such as clustering.

Within the MANTIS project, Sirris is developing a general methodology that can be used to explore sensor data from a fleet of industrial assets. The main goal of the methodology is to profile asset usages, i.e. define separate groups of usages that share common characteristics. This can help experts to identify potential problems, which are not visually observable, when the resulting profiles are compared with the expected behaviour of the assets and when anomalies are detected.

In this article, we will describe the methodology of asset usage profiling for proactive maintenance prediction. The data used in this article is confidential and anonymised; we therefore cannot describe it in detail. It mainly consists of duration and resource consumption as well as a range of parameters measured via different sensors. For our analysis, we used Jupyter Notebook with appropriate libraries such as pandas, scipy and scikit-learn.

Data preparation

Data can be polluted: as it is collected from different sources, it can contain duplicates, wrong values, empty fields and outliers, all of which should be considered carefully. Therefore, the first natural step is to conduct an initial exploration of the data and to prepare a single reference dataset for advanced analysis, by cleaning the data by means of visual and statistical methods and then selecting the right attributes you wish to work with further.

In our example dataset, we find negative or zero-resource consumption, a situation that is obviously impossible, as shown in Figure 1. In our case, since there are few outliers of this type, we simply remove them from the dataset.

Figure 1 Zero or negative consumption
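As a minimal sketch of this first cleaning step, assuming the usages are loaded into a pandas DataFrame with hypothetical 'consumption' and 'duration' columns (the real column names are confidential), the impossible records can be filtered out as follows:

```python
import pandas as pd

# Load the anonymized usage records (file and column names are placeholders).
df = pd.read_csv("usages.csv")

# Zero or negative resource consumption is physically impossible (Figure 1),
# and since only a few such records exist, we simply drop them.
bad = df["consumption"] <= 0
print(f"Removing {bad.sum()} rows with zero or negative consumption")
df = df.loc[~bad].copy()
```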

Another possible example is that of an erroneous date in the data. For example, dates may be too old compared to the rest of your dataset; future dates can even exist. Your decision to maintain, fix or remove wrong instances can depend on many factors, such as how big your dataset is, whether an erroneous date is very important at the current stage, etc. In our case, we maintain these instances since, at this moment, the date is not important for analysis and the percentage of this subset is very low.

Outliers are extreme values that deviate sufficiently from other observations and also need to be dealt with carefully. They can be detected visually and using statistical means. Sometimes we can simply remove them, sometimes we want to analyse them thoroughly. Visualising the data directly reveals some potential outliers; refer to the point in the upper right-hand corner in Figure 2. In our case, such high values for duration and consumption are impossible, as shown in Figure 3. Since it is the first record for this type of asset, it may have been entered manually for test purposes; we consequently choose to remove it.

Figure 2 Visual check for outliers

Figure 3 Impossible data

In Figure 4, we can see a positive linear correlation between consumption and duration, which is to be expected, although we still may find some outliers using the 3-sigma rule. This rule states that, for the normal distribution, approximately 99.7 percent of observations lie within 3 standard deviations of the mean. Then, based on Chebyshev’s Inequality, even in the case of non-normally distributed data, at least 88.8 percent of cases fall within 3-sigma intervals. Thus, we consider observations beyond 3-sigmas as outliers.

Figure 4 Data after cleaning

In Figure 5, we see that our data is quite normal, centred around 0, with most values lying between -2 and 2. This means that the 3-sigma rule will give us more accurate results. Note that you must normalise your data before applying this rule.

Figure 5 Distribution of normalised consumption/s
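A small sketch of this step, continuing from the cleaned DataFrame of the previous snippet and again using the assumed 'consumption' and 'duration' columns, could look as follows:

```python
# Normalise consumption and duration with a z-score, then flag everything
# beyond 3 standard deviations as an outlier (3-sigma rule).
cols = ["consumption", "duration"]          # placeholder column names
z = (df[cols] - df[cols].mean()) / df[cols].std()
outliers = (z.abs() > 3).any(axis=1)
print(f"{outliers.sum()} usages flagged as outliers")
suspicious = df.loc[outliers]               # to be discussed with a domain expert
```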

Results are shown in Figure 6. The reason for such a significant deviation from the average in consumption and duration of certain usages is to be discussed with a domain expert. One instance with very low consumption for a long duration raises particular questions (Figure 7).

Figure 6 3-sigma rule applied to normalised data

Figure 7 Very low consumption for its duration

Advanced data exploration

As previously stated, we are looking to profile asset usages in order to identify abnormal behaviour and therefore, along with duration and resource consumption, we also need to investigate the operational sensor data for each asset. This requires us to define groups of usages that share common characteristics; however, before doing so, we need to select a representative subset of data with the right sensors.

From the preliminary analysis, we observed that the number of sensors can differ between the assets and even between usages for the same asset. Therefore, for later modelling we need to exclusively select usages which always contain the same sensors, i.e. training a model requires vectors of the same length. To achieve this, we can use the following approach, as illustrated in Figure 8.

Figure 8 Selecting sensors

Each asset has a number of sensors that can differ from usage to usage, i.e. some modules can be removed from or installed on the asset. Thus, we need to check the presence of these sensors across the whole dataset. Then, we select all usages with sensors that are present above a certain percentage, e.g. 95 percent, of the whole dataset. Let’s assume our dataset contains 17 sensors that are present in 95 percent of all usages. We select these sensors and discard those with lower presence percentages. This way, we create a vector of sensors of length 17. Since we decided to include sensors if they are present in 95 percent of usages, a limited number of usages may still be selected although they do not contain some of the selected sensors, i.e. we introduce gaps, which are marked in yellow in the figure. To fix these gaps, you can either discard these usages or impute values for the missing sensors. Imputation can be complex, as you need to know what these sensors mean and how they are configured. In our case, these details are anonymised and these usages are consequently discarded. You may need to lower your presence percentage criteria in order to keep a sufficiently representative dataset for further analysis.
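A sketch of this selection step, assuming a DataFrame with one row per usage and one column per sensor (NaN where a sensor was absent; all names are placeholders), could be:

```python
# `sensors`: one row per usage, one column per sensor, NaN where absent.
presence = sensors.notna().mean()                 # fraction of usages per sensor
keep = presence[presence >= 0.95].index           # e.g. the 17 well-covered sensors
subset = sensors[keep].dropna()                   # discard usages with remaining gaps
print(f"Kept {len(keep)} sensors and {len(subset)} usages")
```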

After the optimal subset is selected, we check the correlation of the remaining sensors. We do this because we want to remove redundant information and to simplify and speed up our calculations. Plotting a heatmap is a good way of visualising correlation. We do this for the remaining sensors as shown in Figure 9.

Figure 9 Sensor correlation heatmap
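The correlation check can be sketched as follows, continuing from the 'subset' DataFrame above; the 0.9 threshold for dropping one sensor of each highly correlated pair is an assumption, not a value taken from the project:

```python
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt

corr = subset.corr()
sns.heatmap(corr, cmap="coolwarm", center=0)      # Figure 9-style heatmap
plt.show()

# Keep one sensor from each highly correlated pair (0.9 is an assumed threshold).
mask = np.triu(np.ones(corr.shape, dtype=bool), k=1)
upper = corr.abs().where(mask)
drop = [c for c in upper.columns if (upper[c] > 0.9).any()]
uncorrelated = subset.drop(columns=drop)          # e.g. the 7 remaining sensors
```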

In our case, we have 17 sensors from which we select only 7 uncorrelated sensors and plot a scatter matrix, a second visualisation technique which allows us to view more details on the data. Refer to Figure 10.

Figure 10 Scatterplot matrix of uncorrelated sensors

Based on the selected sensors, we now try to characterise different usages for each asset, i.e. we can group usages across the assets based on their sensor values and, in this way, derive a profile for each group. To do this, we first apply hierarchical clustering to group the usages and plot the resulting dendrogram. Hierarchical clustering helps to identify the inner structure of the data and the dendrogram is a binary tree representation of the clustering result. Refer to Figure 11.

Figure 11 Dendrogram

On this graph, below distance 2, we see smaller clusters grouping ever closer to each other. Hence, we decide to split the data into 5 different clusters. You can also use silhouette analysis to select the best number of clusters.
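A minimal sketch of this clustering step with scipy, continuing from the 'uncorrelated' DataFrame of the previous snippets:

```python
from scipy.cluster.hierarchy import linkage, dendrogram, fcluster
import matplotlib.pyplot as plt

Z = linkage(uncorrelated.values, method="ward")   # hierarchical clustering
dendrogram(Z, no_labels=True)                     # Figure 11-style dendrogram
plt.show()

clusters = fcluster(Z, t=5, criterion="maxclust") # cut the tree into 5 clusters
```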

In order to interpret the clustering, we also want to visualise it, but 7 sensors mean 7 dimensions, and since we cannot plot in multidimensional space, we apply Principal Component Analysis (PCA) to reduce the number of dimensions to 2. This allows us to visualise the results of the clustering, as shown in Figure 12. Good clustering means that the clusters are more or less well separated, i.e. similar colours are close to one another and not mixed too much with other colours, and this is what we see in the figure.

Figure 12 PCA plot
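This projection can be sketched as follows, reusing the 'uncorrelated' sensor data and the cluster labels from the previous snippet:

```python
from sklearn.decomposition import PCA
import matplotlib.pyplot as plt

coords = PCA(n_components=2).fit_transform(uncorrelated.values)
plt.scatter(coords[:, 0], coords[:, 1], c=clusters, cmap="tab10", s=10)
plt.xlabel("PC 1")
plt.ylabel("PC 2")
plt.show()                                        # Figure 12-style projection
```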

After the clustering is complete, we can characterise usages. This can be done using different strategies. The simple method consists in taking the mean of the sensor values for each cluster (i.e. we calculate a centroid) to define a representative usage.
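In pandas terms, this centroid-based profiling is essentially a one-liner (again on the assumed 'uncorrelated' DataFrame and cluster labels):

```python
# One centroid (mean sensor values) per cluster as a representative usage profile.
profiles = uncorrelated.groupby(clusters).mean()
print(profiles)
```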

The last step involves validating the clusters. We can cross-check clustering with the consumption/duration of usages. For instance, we may expect all outliers to fall within one specific cluster, or expect some other more or less obvious patterns, hence rendering our clusters meaningful. In Figure 13 below, we can observe that the 5 clusters, i.e. 5 types of usages, correspond, to an extent but not entirely, to consumption/duration behaviour. We can see purple spots at the bottom and green spots at the top.

 

Figure 13 Relationship between clusters and consumption/duration

Conclusion

At this stage, some interesting outliers were detected in the consumption/duration relationship, which may be explained by the objectives for which the assets were used. We have found clusters that represent typical usages according to the data. Result validation can be improved by integrating additional data, such as maintenance data, into the analysis. Furthermore, the results can be validated and confidently interpreted by the domain experts from Ilias Solutions, the industrial partner we are supporting in their data exploitation.

 

Contact: andriy.zubaliy@sirris.be

Providing reliability of components to customers with the MANTIS Maintenance Architecture

Goizper S. Coop., smart components’ manufacturer

Goizper S. Coop’s products are mechanical components (clutch-brakes, gear boxes, indexing units, etc.) installed in different kinds of production machines. These machines are designed to produce continuously, and unplanned downtime generates high costs. Goizper’s components are key parts of some of these production machines, and the health of the relevant component directly influences the machine’s status.

Clutch Brake Component located within a mechanical press machine

Breakdown of Components

Furthermore, if one of these components fails, it takes a long time for a new one to be sent to the customer’s facilities, for the old one to be removed and for the new one to be set up. In these cases, production asset maintenance means considerable expense for both customers and suppliers.

MANTIS for predictive maintenance

The MANTIS platform provides an online view of these components’ current and future health. Smart sensors installed on the mechanical component are connected to monitoring and alerting, which is performed automatically, within the smart-G box located next to the mechanical component. This big data is then processed in the cloud and, through different maintenance data analytics, the status and future trend of the component’s health are obtained as an output.

Smart-G box and rotary union with sensors

Obviously, the introduction of this cyber-physical system will not eliminate all machine breakdowns, but it will help to considerably reduce unplanned machine downtime, so that the customer and supplier will be able to plan their maintenance tasks and reduce these kinds of stops.

MANTIS Collaboration

Within the MANTIS ECSEL project, Goizper has collaborated closely with one of its customers, Fagor Arrasate, to address the real inconveniences and reduce the expenses that unplanned downtimes cause in both firms.

Detecting usage on Compact Excavator with ILIAS NVO

Introduction

Compact excavators are often rented at an hourly or daily rate. No meters are used, which means that only calendar hours or days are used for billing. For maintenance, the system has an “engine hour” meter, but this only indicates when the system is running (idle, driving or operating).

The machine used in the test, a Compact Excavator

The proposal is to introduce other meters for more precise counting of the actual use of the machine. A single sensor is proposed for the solution, which provides a very cheap way of getting much more usage data.

Business case

For the rental case, a “power by the hour” rate could be more efficient, i.e. the end customer pays for the actual usage or wear of the machinery and not just the number of hours the machine is reserved. This would give a fairer pricing model, since the real cost of running the machinery is mostly due to maintenance. It would also give the user an incentive to take care of the machine while using it, and a better way to estimate the need for maintenance or to balance out the usage of equipment.

For other cases, a simple sensor could bring benefits such as higher fleet availability and lower operating costs by enabling the following:

  • Monitor machine health and predict asset failure (predictive maintenance)
  • Prevent or detect abuse
  • Provide data for warranty models
  • Provide data for fleet management/optimization

All of the above-mentioned points can be addressed with a simple and robust IMU (inertial measurement unit).

Proof of concept thesis

For this proof of concept, we formulate a thesis to test the data collection and analytic capability of such a system:

“We believe that we can measure how many hours a hammer and tracks / undercarriage has been used on a compact excavator by measuring the vibration pattern”

The hydraulic hammer mounted on the Compact Excavator

As a proof of concept, we want to be able to detect the following states:

  • Engine Off – ID 4001
  • Idle – low RPM ID 4002
  • Idle – High RPM ID 4003
  • Driving – Turtle gear ID 4010
  • Driving – Rabbit gear ID 4011
  • Driving – Slalom ID 4012
  • Hammer – ID 4020

Other states (such as abuse or hard usage) could also be detected.

The Machine Learning approach

A single IMU sensor is installed in the frame of the vehicle. Data is collected with high resolution and high sampling frequency. Data was collected on a small embedded device in the vehicle.

Model creation data

A series of tests covering the aforementioned states was carried out. The data was labeled with each state.

After data labeling, a decision tree was created using statistical features of the data.

The decision tree can now be applied to data collected in real time, on the embedded device.
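A minimal sketch of this kind of pipeline is shown below: the IMU signal is split into fixed-size chunks, simple statistical features are computed per chunk, and a decision tree is trained on them. The window length, feature set and column names are assumptions for illustration, not the actual implementation.

```python
import numpy as np
import pandas as pd
from sklearn.tree import DecisionTreeClassifier

def window_features(imu: pd.DataFrame, labels: pd.Series, win: int = 512):
    """Split the IMU signal into fixed-size chunks and compute statistical
    features per chunk (column names and feature set are assumed)."""
    X, y = [], []
    for start in range(0, len(imu) - win, win):
        chunk = imu.iloc[start:start + win]
        feats = []
        for col in ["acc_x", "acc_y", "acc_z"]:
            v = chunk[col].to_numpy()
            feats += [v.mean(), v.std(), np.sqrt(np.mean(v ** 2)), v.max() - v.min()]
        X.append(feats)
        y.append(labels.iloc[start:start + win].mode().iloc[0])  # majority label, e.g. 4020
    return np.array(X), np.array(y)

# X_train, y_train = window_features(train_imu, train_labels)
# clf = DecisionTreeClassifier(max_depth=5).fit(X_train, y_train)
# predicted_states = clf.predict(window_features(test_imu, test_labels)[0])
```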

Test results

A new series of tests was carried out, and this data was again labeled with each state. The data was then collected and parsed with the decision tree generated from the earlier model data (using fixed data chunk sizes).

In the figure below, the results from the algorithms can be seen.

Visualization of the test results

In the top row of bars, the data labels (the ground truth) are shown in color. In the next row of bars, the detected states are colored. The bottom graph is a visualization of part of the collected raw data.

As can be seen, the colors match with very high precision. Only at the beginning and end of the states are there small errors. This is most likely due to the data labeling (i.e. as the labels were created manually with a stopwatch, they may not be perfectly aligned in time).

Test conclusion

The IMU sensor and embedded device mounted on the compact excavator are able to provide data for machine learning and recognition of at least six different usage patterns:

  • ignition
  • idle
  • slow driving
  • fast driving
  • slalom driving
  • hammering

The usage information can now be collected, and a “power by the hour” renting concept can be introduced. For example, the renting company can provide an app where the customer can specify how much hammering, how much driving, etc. they want, and a much lower price can then be offered. If data is collected and transmitted via GSM, the app can even update in real time, showing usage data.

This means that the operator of the vehicle can see in real time how much usage has been spent. A warning could be provided when, for example, 80% of the hammering hours have been used, similar to travelling abroad with a mobile phone that has a fixed number of megabytes available.

Perspective

The whole setup was completed within a few hours: mounting the system took 30 minutes, collecting model creation data took one hour, creating the models took 30 minutes, and testing the system took another hour. We started in the morning, and before lunchtime everything was mounted, calibrated, validated and ready for use.

This sensor and embedded system provide a very easy way of obtaining actual and valid usage information on mechanical systems.

The system can easily detect more states. The meters provided could also be aggregated, which could be used to tell the operator when it is time to replace the hammer, before it actually breaks. The time savings from this alone are enough to pay for the system.

Human-machine interaction in MANTIS project

Proactive, collaborative and context-aware HMI

One of the objectives of the MANTIS project is to design and develop the human-machine interface (HMI) to deal with the intelligent optimisation of production processes through the monitoring and management of their components. The MANTIS HMI should allow intelligent, context-aware human-machine interaction by providing the right information, in the right modality and in the best way for users when needed. To achieve this goal, the user interface should be highly personalised and adapted to each specific user or user role. Since MANTIS comprises eleven distinct use cases, designing such an HMI presents a great challenge. Any unification of the HMI design may impose constraints that could result in an HMI with poor usability.

Our approach, therefore, focuses on the requirements that are common to most of the use cases and are specific to proactive and collaborative maintenance. A generic MANTIS HMI was specified to an extent that does not introduce any constraints for the use cases, but at the same time describes the most important features of the MANTIS HMI that should be considered when designing the HMI for individual use cases.

The MANTIS HMI specifications are the result of refining the usage scenarios provided by the industrial partners, taking the general requirements of the MANTIS platform into account. The functional specifications describe the HMI functionalities present in most use cases, abstracted from the specific situation of each single use case.

We describe a generic static model that can be used together with the requirement specifications of each individual use case to formalize the structure of the target HMI implementation. The model has been conceived, in particular, with two ideas in mind: (i) to provide means that help to identify the HMI content elements of a given use case and their relationships, and (ii) to unify (as much as possible) the HMI structure of different use cases, which is useful for comparing implementations and exchanging good practices. When setting up the model structure, we follow the concepts of descriptive models applied in task analysis and add the specifics of MANTIS, denoted as MANTIS high-level tasks. For each of these high-level tasks, we provide a list of supporting functionalities.

MANTIS human-machine interaction comprises five main aspects:

  • User interfaces;
  • Users;
  • MANTIS platform;
  • Production assets; and
  • Environment.

Through their user interfaces, several different users within a use case communicate with the MANTIS platform, which in turn communicates with production assets. Interaction can take place in both directions. Users can not only access information retrieved from production assets and stored on the platform, but also provide input to the MANTIS system. They can initiate an operation which is then carried out by the platform, such as rescheduling a maintenance task, or respond to a system-triggered operation, for example an alarm. On the other hand, through the MANTIS platform, users can also communicate among themselves. In addition to straightforward communication in the form of text or video chat functions, the users can also communicate via established workflows.

Last but not least, the environment is also a main part of the interaction. Although it can be treated neither as a direct link between the user and the system nor as part of the communication among the users, the environment can influence the human-machine interaction through context-aware functionalities.

From the users’ point of view, the human-machine interaction within the MANTIS system supports five main high-level user tasks associated with proactive and collaborative maintenance:

  • Monitoring production assets;
  • Data analysis;
  • Maintenance tasks scheduling;
  • Reporting; and
  • Communication.

While monitoring production assets, data analysis and maintenance task scheduling are vital for proactive maintenance, reporting and communication enable collaboration among different user roles. Each of these tasks is carried out by a number of MANTIS-specific functionalities that can be classified as user input, system output, or user- or system-triggered operations. These functionalities should cover all the main aspects of MANTIS human-machine interaction and should also be general enough to be applicable to any MANTIS use case as well as potential future use cases.

MANTIS HMI demonstrator

At the MANTIS meeting in Helsinki, the first version of the web-based HMI demonstrator, developed by XLAB with the Angular Dashboard Framework and other JavaScript libraries, was presented to the MANTIS consortium. It is currently connected to the MIMOSA database and demonstrates live data from the FORTUM use case. The HMI is designed as a customizable, user-dependent, responsive multi-widget dashboard, comprising basic read-only widgets such as graphs and tables. The features of the demonstrator follow the HMI functional specifications, and it is designed in a way that can be applied to any use case with a MIMOSA database.

The first version of the web-based HMI demonstrator

In the near future, many other features will be implemented, including more widget types, dashboard navigation, search function and sharing of data views. Some context-aware features, such as hidden widgets that appear when needed and suggestions of further user actions based on the usage history, will be implemented as well. In addition, general visual design recommendations such as colours, fonts and widgets positioning, described earlier in the project, will be applied.

Helsinki Consortium Meeting in May 2017, and the Conventional Energy Production Use-Case

The second (sixth overall) full consortium meeting of 2017 was held from the 8th to the 10th of May. This time it was hosted by VTT at their new Center for Nuclear Safety located in Espoo, Finland. The three-day event gathered 65 participants from all of the participating countries. The program was more technologically oriented and contained a long open space session, where partners could present their work within the project. The tight program still allowed some time to enjoy the wonderful Finnish spring weather.

The wonderful Finnish spring

The Finnish use-case was prominently on display at the Open Space session of the MANTIS consortium meeting. The first open space room was dedicated to the Finnish use case, with Nome, Wapice, Fortum, VTT and Lapland University of Applied Sciences (UAS) each presenting the work they have done in it. Wapice and Fortum presented their HMIs (IoT-Ticket and TOPi, respectively). Nome and VTT presented their measurement systems (NMAS and the affordable sensor research, respectively), and finally Lapland UAS presented the database and REST interface that allows each partner to share and access data beyond organizational boundaries. The second room had most of the other use cases represented. Of note was XLAB’s common MANTIS user interface demo, which can be connected to the Finnish use case platform.

Open space session at Helsinki Consortium meeting

The Finnish use case is centered on a flue gas recirculation blower located in Fortum’s Järvenpää power plant. The blower is classified as a critical component in the energy production process and is monitored closely. In this use case, Wapice, Nome and VTT have all provided their own sensors or virtual sensors to monitor the performance and condition of the blower. In addition, Lapland UAS has a few Wzzard sensors, made by B+B Electronics/Advantech, that provide some additional measurement data volume; however, these are not related to the Järvenpää case. The measurement data is stored, using the REST interface developed by Lapland UAS, in the MANTIS database, which is based on the MIMOSA data model.

Flue gas recirculation blower in Fortum’s TOPi Proview browser

The REST interface and MIMOSA database mapper provide a simple interface between different applications and systems, which is easy both to use and to integrate. It offers basic CRUD functionalities and contains a mapper that maps measurement-system-specific data formats and structures into MIMOSA-compliant data structures to ensure interoperability and compatibility with the MIMOSA data model. It is widely used in the Finnish use case, and research partners from both Slovenia and Hungary have shown interest in utilizing MIMOSA in their use cases.
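As an illustration only, pushing a new measurement through such a REST/CRUD interface could look roughly like the sketch below; the endpoint URL, the payload fields and the absence of authentication are purely hypothetical and do not describe the actual Lapland UAS interface or its MIMOSA mapping.

```python
import requests

# Hypothetical payload for a single vibration measurement.
measurement = {
    "asset": "flue-gas-blower",        # placeholder asset identifier
    "sensor": "vibration-de-01",       # placeholder sensor identifier
    "timestamp": "2017-05-08T10:15:00Z",
    "value": 2.4,
    "unit": "mm/s",
}

# Placeholder URL; the mapper behind the interface would translate this
# payload into MIMOSA-compliant data structures.
resp = requests.post("https://example.org/mantis/api/measurements",
                     json=measurement, timeout=10)
resp.raise_for_status()
```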

A diagram of the Finnish use case

SmartG presentation in Hannover Messe

Goizper and IK4-TEKNIKER will be present at the Hannover Messe from 24 to 28 April 2017, presenting Smart G, a data acquisition module for clutch-brake monitoring.

Clutch-brake systems produced by Goizper are key components in cutting, forming, folding and press machines.

The aim of this presentation is to show how incorporating the Smart G module can convert a clutch-brake system into a monitorable smart component with self-diagnostics capabilities that can provide information about the current state of the component and predict failures before they occur.

The communications modules incorporated in the Smart G component provide the capabilities to:

  • Remotely monitor the component
  • Send the data to a cloud platform where all historic data are stored.

Having the data of Goizper’s clutch-brake fleet on a cloud platform will make it possible to use more advanced techniques and algorithms to predict failures and/or the remaining useful life of key components of the system.

The benefits are two-fold: Goizper will drastically improve its knowledge of its equipment, improving the reliability of its products and the maintenance services provided to customers, while customers will benefit from reduced machine downtime and a more cost-effective maintenance strategy.

Goizper and Tekniker’s work on failure prediction and diagnosis, as well as cloud platform development, have received funding from the European Union under the MANTIS project.

 

Figure 1. Smart G concept block diagram.

 

Figure 2. General status of the machine.

 



Figures 3 and 4. Braking and Clutching processes performance

Figure 5. Active alarms and alarms history

 

Figure 6. Product pictures

Classifying tool images to enhance predictive maintenance

Introduction

Philips Consumer Lifestyle (PCL) is an advanced manufacturing site located in Drachten, the Netherlands. Our organization falls within the Personal Health business cluster of Philips and is primarily concerned with the manufacturing of personal electric shavers.

Electric shavers comprise two principal component ‘blocks’: a body and a shaving unit. Each shaving unit contains three metallic shaving ‘heads’, which in turn are composed of a shaving blade (the cutting element) and a shaver cap (the guard). The focus of the MANTIS project at PCL falls on the production of these shaver caps.

Philips Shaver and components

An electro-chemical process is used in the manufacturing of shaver caps, where an electric current is passed over the raw input material, which is conductive, in order to cut this material into the desired shape. Production of the shaver caps at PCL is fully automated.

Production Line

Precision tooling is required throughout the various stages of shaver cap manufacturing. At present, these tools are built on-site and are kept in stock so that replacements are available in the event of tooling malfunctions. Having functional tools available around the clock is essential to meet our goal of 100% ‘up-time’ for our assembly lines. However, this is an expensive way to solve the problem, both in terms of the additional equipment required and the extensive downtime that results from manual tooling replacements. Therefore, the timely maintenance of these tools presents a challenge.

Tool maintenance

Currently, the maintenance strategy on the shaver cap production line is a mixture of reactive and preventive maintenance. In line with the MANTIS goal, our aim is to transform this into a predictive or even a prescriptive maintenance strategy. However, this requires data: in order to perform maintenance on the tooling at exactly the right moment, information about the tooling is necessary to make useful decisions.

Data directly related to the current state of the tooling (e.g. degree of wear, damage, etc.) is hard to retrieve in some cases, due to process-specific reasons. In our use case the tooling is delicate and very precise (micron range, difficult geometries), which makes frequent measurement of the tooling difficult and expensive in a mass production environment. Currently, only indirect data is available about the use of the tools in the production machines, but not about the actual state of the tool itself. These data can be used to estimate, for example, the remaining useful life (RUL) of a tool, but in order to improve and verify the RUL prediction models, more direct data is necessary.

Tool wear sensor

To address this, a collaboration between the University of Groningen and Philips Consumer Lifestyle has been started in the context of the MANTIS consortium, with the goal of developing a tool wear sensor based on an optical imaging system. A robust setup with a high-resolution sensor will take detailed images of the individual tools.

Tool images and labelling system

The raw images are preprocessed: the parts of interest of the tool are cut out of the image and rotated to form the input for a machine learning algorithm. The next step is to normalize the pictures so that they are more or less comparable.
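A minimal sketch of such a preprocessing step is given below; the crop box, rotation angle and target size are placeholders, not the parameters of the actual PCL pipeline.

```python
import numpy as np
from PIL import Image

def preprocess(path, box=(100, 100, 400, 400), angle=0.0, size=(128, 128)):
    """Crop the region of interest, rotate, resize and normalize a tool image.
    The box, angle and size values are placeholders."""
    img = Image.open(path).convert("L")                    # grayscale tool image
    img = img.crop(box).rotate(angle, expand=False).resize(size)
    arr = np.asarray(img, dtype=np.float32)
    return (arr - arr.mean()) / (arr.std() + 1e-8)         # zero mean, unit variance
```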

Since we have no baseline, we asked our maintenance engineers (the domain experts) to label all these individual images. Together we chose three specific labels: wear, damage and contamination. The input of the maintenance engineers is used to train the algorithm, but also to assess how consistently the individual pictures are labelled when multiple engineers are considered.

Over 1500 pictures have currently been labelled in about a month. Initial results seem to indicate that simple machine learning can outperform human labelling of tooling deviations.
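Training such a simple model on the labelled images could be sketched as follows; logistic regression on flattened pixels is used here only as a stand-in, since the actual model is not described in this post, and image_paths and labels are assumed to come from the labelling system shown above.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# image_paths: labelled tool images; labels: "wear" / "damage" / "contamination".
X = np.stack([preprocess(p) for p in image_paths]).reshape(len(image_paths), -1)
X_tr, X_te, y_tr, y_te = train_test_split(X, labels, test_size=0.2,
                                          stratify=labels, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print("hold-out accuracy:", clf.score(X_te, y_te))
```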

If the results are good, the trained algorithm will ultimately be used with an automatic calculation engine to run new images through the algorithm. This means that we also have to change the way of working and provide the maintenance engineers with easy-to-use tools to take these new images as part of their regular maintenance steps. The outcome of the analysis forms an input for determining the remaining useful life of the tool, in combination with both process and quality data.