Blog

PVTECH publishes 3E’s article on data mining for automatic fault detection and diagnosis from photovoltaic monitoring data

The continuous and systematic analysis of performance data from the monitoring of operational PV power plants is vital to improving the management, and thus the profitability, of those plants over their lifetime. The article draws on an extensive programme undertaken by 3E to assess the performance of a portfolio of European PV power plants it monitors. It illustrates 3E’s approach to automatic fault detection and explores the various data mining methodologies used to gain an accurate understanding of the performance of large-scale PV systems, and how that intelligence can best be used for the optimal management of solar assets.

3E’s work on automatic fault detection and diagnosis has received funding from the European Union under the MANTIS project.

Please click here to read the full article.

Modern Internet of Things technologies in industrial condition monitoring

Introduction

Wapice is a Finnish company specialized in providing software and hardware solutions to industrial companies for a wide variety of purposes. We have developed remote management and condition monitoring solutions since the beginning, and our knowledge of this business domain has evolved into our own Internet of Things platform called IoT-Ticket. Today IoT-Ticket is a complete industrial IoT suite that includes everything required, from acquiring the data to visualizing and analyzing the information critical to an asset’s lifetime.

Why condition monitoring

In predictive maintenance the goal is to prevent unexpected equipment failures by scheduling maintenance actions optimally. When successful, it is possible to reduce unplanned stops in equipment operation and save money through a higher utilization rate, extended equipment lifetime and lower personnel and spare part costs. Succeeding in this task requires a deep understanding of the asset’s behaviour, statistical information about equipment usage and knowledge of the wear models, combined with measurements that reveal the equipment’s current state of health. Earlier these measurements were carried out periodically by trained experts with special equipment, but modern IoT technologies now make it possible to gather real-time information about the field devices continuously (i.e. condition monitoring). While this increases the availability of data, it creates another challenge: how to process massive amounts of data so that the right information is found at the right time. In the condition monitoring process, the gathered data should ideally be processed so that the amount of data transferred towards the uplink decreases while the understanding of the data increases.

A correct architecture leads to a process where only relevant information traverses the uplink in the condition monitoring chain

This article describes how modern IoT technologies help in condition monitoring related processes and how data aggregation solutions make it possible to share condition monitoring information between different vendors. This further improves operational efficiency by enabling real-time condition monitoring not only at the asset level but also at the plant or fleet level, where service operators must understand the behaviour and remaining lifetime of assets coming from different manufacturers.

WRM247+ data collector for edge analysis

The first link in the condition monitoring chain is the hardware and sensors. In order to measure the physical phenomena behind the wear of the asset, a set of sensors is required to sample and capture data from the monitored devices. This data must be buffered locally, pre-processed, and finally only the crucial information must be transferred to the server, where the physical phenomena can be identified from the signals. Depending on the application area, different types and models of sensors are required to capture just the relevant information. Also depending on the physical phenomena, different kinds of analysis methods are required. For these reasons, measurement systems have so far been custom tailored to the target. This approach works, of course, but designing custom tailored measurement systems is time consuming and expensive. Our approach to overcoming these problems has been to implement IoT building blocks that adapt to a wide variety of purposes and can be taken into use easily and flexibly. Flexibility and user-friendliness are cornerstones of our system.
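As an illustration of this pre-processing step, the sketch below condenses a locally buffered vibration window into a few summary values (RMS, peak, crest factor), so that only this condensed information needs to travel uplink. The sampling rate, window length and transmit hook are assumptions for the example, not the actual WRM247+ implementation.

    # Minimal sketch of edge pre-processing: condense a buffered vibration
    # window into a few summary values so that only crucial information is
    # sent uplink. Sampling rate, window length and the send() hook are
    # illustrative assumptions, not the actual WRM247+ firmware.
    import math

    SAMPLE_RATE_HZ = 10_000      # assumed sensor sampling rate
    WINDOW_SECONDS = 1.0         # length of the locally buffered window

    def summarize_window(samples):
        """Reduce a window of raw vibration samples to a handful of features."""
        n = len(samples)
        mean = sum(samples) / n
        rms = math.sqrt(sum((x - mean) ** 2 for x in samples) / n)
        peak = max(abs(x - mean) for x in samples)
        crest = peak / rms if rms > 0 else 0.0
        return {"rms": rms, "peak": peak, "crest_factor": crest}

    def process_buffer(buffer, send):
        """Summarize the local buffer and transmit only the derived features."""
        send(summarize_window(buffer))   # e.g. hand over to the gateway uplink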

On the hardware side, our IoT platform offers several approaches. The WRM247+ measurement and communication device is our hardware reference design; it allows connecting a wide variety of industrial sensors using either wired or wireless communication methods, and also provides local buffering and pre-processing of data as well as communication to the server. Examples of supported standard protocols are CAN, CANopen, Modbus, Modbus TCP, 1-Wire and digital/analog I/Os. This device is an excellent starting point for the most common industrial measurement purposes.

WRM247+ Measurement and communication gateway

In the MANTIS project, Wapice has been investigating the interoperability of wireless and wired sensors. In Use Case 3.3, Conventional Energy Production, we will demonstrate the fusion of wireless Bluetooth Low Energy (BLE) technology and wired high-accuracy vibration measurements. In order to achieve this, we have built support for connecting IEPE standard vibration sensors to the WRM247+ device. The device supports any industrial IEPE standard sensor, which makes it possible to select a suitable sensor for the application area. Additionally, we have built support for connecting a network of BLE sensors to the device. In the use case, the purpose of this arrangement is to gather temperature information around the flue-gas circulation blower using the wireless BLE sensors and to perform vibration measurements on the rolling bearing. The temperature measurements reveal possible problems, e.g. in the lubrication of the bearing, and may allow actions to be taken before a catastrophic failure happens.

In case the WRM247+ device is not suitable for the purpose, it is possible to integrate custom devices into IoT-Ticket easily using the available REST API. For this purpose we provide full documentation and free developer libraries for several programming languages, including C/C++, Python, Qt, Java and C#. Other integration methods include, for example, OPC or OPC UA and the Libelium sensor platform, which supports e.g. wireless LoRa sensors. In addition, Wapice has long experience in designing machine-to-machine (M2M) solutions, including PCB layout, embedded software design and protocol implementation, so we also offer the possibility to get tailored Internet of Things hardware or embedded software that fully suits your needs.
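As a rough illustration of such a REST integration, the sketch below pushes a single measurement from a custom device using Python’s requests library. The endpoint URL, payload fields and credentials are placeholders, not the documented IoT-Ticket API; the official developer libraries and documentation define the actual schema.

    # Hypothetical sketch of pushing one measurement from a custom device to
    # the platform over REST. URL, payload layout and credentials below are
    # placeholders; the real schema is defined by the IoT-Ticket REST API
    # documentation and developer libraries.
    import requests

    API_URL = "https://my-iot-server.example.com/rest/v1/process-data"  # placeholder
    AUTH = ("device-user", "device-password")                           # placeholder

    def push_measurement(device_id, name, value, unit):
        payload = {
            "deviceId": device_id,   # assumed field names, for illustration only
            "name": name,
            "value": value,
            "unit": unit,
        }
        response = requests.post(API_URL, json=payload, auth=AUTH, timeout=10)
        response.raise_for_status()
        return response.json()

    # Example: report a bearing temperature reading from a custom collector
    # push_measurement("custom-device-01", "bearing_temperature", 63.2, "C")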

 

The figure describes the interface options in the IoT-Ticket architecture

 

IoT-Ticket portal for back-end tools

On the back-end side, IoT-Ticket provides all the necessary tools for visualizing and analyzing data. Our tools are web based and require no installation: simply log in, create and explore!

The Dashboard allows users to interact securely with remote devices, check their status, view reports or get status updates on current operational performance. It can be utilized in various scenarios, e.g. vehicle tracking, real-time plant or machinery monitoring and control. As many Dashboard pages are available, the user can switch between different contexts and drill down into information, from the enterprise level to sites, assets and data nodes. The Dashboard also includes two powerful tools for content creation: the Interface Designer and the Dataflow Editor.

Using the Interface Designer, the user can draw new elements or add images, gauges, charts, tables, Sankey diagrams, buttons and many other elements onto the Dashboard. These elements can then be easily connected to data by dragging and dropping Data Tags onto them.

The Dataflow Editor is an IEC 61131-3 inspired, web-based, graphical block programming editor that integrates seamlessly with the Interface Designer. A user can design the dataflow by connecting function blocks to implement complex logic operations, which can then be used to execute control actions or routed to user interface elements for monitoring purposes.

In Use Case 3.3, Conventional Energy Production, Wapice – together with Finnish partners – demonstrates Cloud-to-Cloud integration in the MANTIS platform using the IoT-Ticket platform tools. In this use case, LapinAMK and VTT have jointly set up a Microsoft Azure based MIMOSA data aggregation database. The plan is to share condition monitoring KPI information through the MIMOSA database, which allows sharing data through a REST API. Devices may push data to MIMOSA either directly or through local clouds.

Data sharing and aggregation in the MANTIS platform using the MIMOSA aggregation database

 

IoT-Ticket allows communication with REST sources using the Interface Designer’s graphical flow programming tools. Getting data from a REST source is done by simply creating a background server flow that contains a trigger and a REST block. The REST block is configured with a username and password for authenticating to the REST source, the source URL and the REST method containing the XML/JSON payload. From the REST response the data value is parsed and output to data charts or forwarded for further processing. Additionally, virtual data tags allow forwarding the data into the IoT-Ticket system. By configuring the flow to run in server mode, it runs silently in the background all the time. The operation interval is configured using the timer block, which fires the REST block at set intervals. The example video below shows how Cloud-to-Cloud communication between MIMOSA and IoT-Ticket is established in Use Case 3.3. In this example video, sinusoidal test data originates from the LapinAMK enterprise and a tachometer RPM reading from the system under test using the WRM247+ device.
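For readers more comfortable with code than with the graphical flow, the logic described above (a timer firing a REST block, parsing the response and forwarding the value) corresponds roughly to the polling loop sketched below. The MIMOSA URL, credentials, response field and the forwarding step are assumptions for illustration only.

    # Rough code equivalent of the background server flow: a timer fires a
    # REST request against the aggregation database, the value is parsed from
    # the response and forwarded onwards. URL, credentials, JSON field names
    # and the forwarding step are assumptions, not the MIMOSA/IoT-Ticket schemas.
    import time
    import requests

    MIMOSA_URL = "https://azure-host.example.com/mimosa/api/kpi/latest"  # placeholder
    POLL_INTERVAL_S = 60                                                 # timer block interval

    def poll_once(session):
        response = session.get(MIMOSA_URL, auth=("user", "password"), timeout=10)
        response.raise_for_status()
        return response.json().get("value")      # assumed response field

    def forward_value(value):
        print("new KPI value:", value)           # stand-in for writing a virtual data tag

    def run_server_flow():
        with requests.Session() as session:
            while True:                           # runs silently in the background
                value = poll_once(session)
                if value is not None:
                    forward_value(value)
                time.sleep(POLL_INTERVAL_S)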

The reporting and analytics tools complement the platform’s features. The report editor integrates seamlessly into the Dashboard and offers the user the possibility to create or modify content. The user can draw new elements or add images, gauges, charts, tables, Sankey diagrams, buttons and many other elements onto the report. These elements can then be easily connected to data by dragging and dropping Data Tags onto them. The analytics tool is also integrated into the Dashboard and supports you in understanding your data better.

Benefits of IoT in condition monitoring

Typically, condition monitoring data has been scattered across separate information systems, and it has been very hard or even impossible to create a single view of all the relevant data or to correlate information scattered over different databases. MIMOSA is an information exchange standard that allows classifying and sharing condition monitoring data between enterprise systems. It addresses the data sharing problem by allowing the aggregation of crucial information into a single location in a uniform and understandable format. When interfaced with modern REST based information sharing technologies that utilize, for example, JSON or XML based messaging, it is surprisingly easy to collect and share crucial information using a single aggregation database. When accompanied by modern web based industrial IoT tools, it is then easy to visualize the data, create reports or perform further analysis using only the crucial information available.

In this blog post I have highlighted some examples of how industrial IoT building blocks help you gather relevant condition monitoring information, create integrations between data sources and aggregate business relevant information into a single location. Focusing on the crucial information allows you to better understand your assets and predict their maintenance needs. This is exactly what is needed when optimizing the value of your business!

See more examples on our website (www.iot-ticket.com) or on our YouTube channel: https://www.youtube.com/channel/UCJt9c3edgH7cQdSYYIH0YbQ/videos

 

 

MANTIS for Compressor maintenance

With the SMARTLINK monitoring program, Atlas Copco makes use of connectivity data and data intelligence to help customers keep up their production uptime and, where possible, improve energy efficiency.

With approximately 100,000 machines connected with SMARTLINK, Atlas Copco makes compressors in the field communicate directly with the back office and their service technicians.

Atlas Copco’s SMARTLINK technology allows for remote monitoring of compressors in the field.

Customers become more proactive, planning is more efficient and reliability of the compressed air installations is better than ever before.

Customers of SMARTLINK get a monthly overview of machine information, including running hours and the time left before service, thus allowing them to order a service visit at the right time, maintaining maximum uptime and energy efficiency.

With SMARTLINK they can closely follow up on machine warnings via email or SMS. With this information they can take the necessary actions to prevent a breakdown.

With the MANTIS project, Atlas Copco will take proactive maintenance to the next level, by:

  • predicting the remaining useful life of consumables and components that are subject to wear
  • detecting upcoming problems or inefficiencies before they deteriorate
  • remotely diagnosing the root cause of an unplanned shutdown
During the MANTIS project, Atlas Copco will take proactive maintenance to the next level by predicting component lifetime, detecting anomalies and performing remote diagnosis of the compressed air installation.

Moreover, in order to reduce communication costs, smart sensing technology is being investigated to determine how local preprocessing of information can significantly reduce the amount of data to be transmitted.

A major challenge for Atlas Copco is the huge variety of compressor types and operating conditions. To process this enormous amount of information, self-learning techniques are combined with physics-based compressor models. Eventually, these will enable the discovery of new patterns in data, collected on a worldwide scale.

The ultimate goal is to translate these data into actionable information for the global service network.

Service interventions will be planned even better and will be shorter and more efficient. Problems will be fixed in one visit, as technicians will know in advance what to do and what parts to bring.

The results of the project will allow better service planning, shorter visits and a first-time fix, thus reducing downtime for the end customers and ensuring sustainable productivity.

For the customer, this means no unnecessary maintenance and less planned or unplanned downtime, and therefore maximum productivity.

Analyzing maintenance log data to predict system failures

Cyber-Physical Systems (CPS) are often very complex and require a tight interaction between hardware and software. As in almost any software system, CPS also generate different kinds of logs of the activities performed, including correct operations, warnings, errors, etc. Frequently, the logs generated are specific to the different subsystems and are generated independently. Such logs contain a wealth of information that needs to be extracted and that can be analyzed in different ways to understand how each subsystem behaves, and even to retrieve information about the behavior of the overall system. In particular, considering the generated logs, it is possible to:

  1. Analyze the behavior of each single subsystem, looking at the data it generates independently;
  2. Analyze the overall behavior of the system, looking at the correlations among the data generated by the different subsystems.

Such data are very useful for understanding the behavior of a system and are often used to perform post-mortem analysis when failures happen. However, such data could also be used to understand in a more comprehensive way how the system behaves, through a real-time analysis able to continuously monitor the different subsystems and their interactions. In particular, it is possible to focus on preventing failures through predictive maintenance triggered by specific analyses.

Making predictions about system failures by analyzing log files is possible, but such predictions are strictly related to certain characteristics of those files. In particular, some very important characteristics are: data generation frequency, level of information detail, and history.

The data generation frequency needs to be related to the prediction time and the time required to take proper action. For example, if we need to detect a failure and take proper action within a few minutes, we need to use data generated with a high frequency (e.g., on the scale of seconds) and we cannot use data generated with a lower one (e.g., on the scale of hours). This requirement affects the ability to make predictions and their usefulness for implementing proper maintenance actions.

The information provided needs to have the proper granularity and contain meaningful messages. In particular, it is important to get detailed information about errors, warnings, operations performed, the status of the system, etc. The specific details required are tightly connected to the specific predictions that are needed. Moreover, the finer the granularity of the information, the higher the chances of being able to create a proper prediction model.

A high quality data history is required to build proper prediction models. However, just having a large dataset is not enough. Historical data need to be representative of the operating environment and include all the possible cases that may happen during operations. In particular, information is required about both the log entries and the actual behavior of the system in order to create a reliable model of reality.
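To make these requirements a bit more concrete, the sketch below shows one common way of turning a history of timestamped log entries plus known failure times into labeled training examples: per-window event counts, labeled by whether a failure occurs within the following prediction horizon. The column names, window size and horizon are assumptions for illustration only.

    # Illustrative sketch: turn a history of timestamped log events and known
    # failure times into labeled examples for a failure-prediction model.
    # Column names, window size and prediction horizon are assumptions.
    import pandas as pd

    def build_training_set(logs, failures, window="1h", horizon="4h"):
        """logs: DataFrame with columns ['timestamp', 'severity'] (one row per entry).
        failures: Series of known failure timestamps.
        Returns per-window features labeled 1 if a failure occurs within `horizon`."""
        logs = logs.set_index("timestamp").sort_index()

        # Simple per-window features: total entries and error/warning entries
        counts = logs.resample(window).size().rename("n_entries")
        errors = (logs["severity"].isin(["ERROR", "WARNING"])
                  .resample(window).sum().rename("n_errors"))
        features = pd.concat([counts, errors], axis=1).fillna(0)

        # Label: does any known failure fall within `horizon` after the window end?
        horizon_td = pd.Timedelta(horizon)
        ends = features.index + pd.Timedelta(window)
        features["failure_within_horizon"] = [
            int(((failures > end) & (failures <= end + horizon_td)).any())
            for end in ends
        ]
        return features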

The requirements described are just a first step towards the definition of a proper predictive maintenance model, but they are essential. Moreover, the proper approaches and algorithms need to be selected based on the specific system and its operating conditions.

Reference Architecture of the Portuguese MANTIS Pilot

Introduction

The MANTIS Steel Bending Machine pilot aims to provide the use case owner, ADIRA, with a worldwide remote maintenance service for its customers. The main goal is to improve its services by making new maintenance capabilities available at reduced cost, reducing response times, avoiding rework and allowing for better planning of maintenance activities.

To this end, existing ADIRA machines (starting with their high-end machine model, the Greenbender) will be augmented with extra sensors; this data, together with information collected from existing sensors, will be sent to the cloud to be analyzed. Results made available by the analysis process will be presented to machine operators or maintainers through an HMI.

Adira –  Greenbender GB-22040

A number of partners are involved in the development and testing of the modules, covering the communication middleware (ISEP, UNINOVA), the data processing and analytics activities (INESC, ISEP), the HMI applications (ISEP), and a stakeholder providing a machine to be enhanced with the MANTIS innovations (ADIRA).

System Architecture

The distributed system being built follows a reference architecture composed of a number of modules, grouped into four logical blocks: the machine under analysis, the data analysis module, the visualization module, and the middleware supporting inter-module communication.

Architecture of the maintenance system for MANTIS

Machine

Data regarding the machine under analysis are collected by means of sensors integrated with the machine itself. This logical block thus consists of the data sources that will be used for failure detection, prognosis and diagnosis. This set of data sources comprises an ERP (Enterprise Resource Planning) system, data generated by the machine’s Computer Numerical Controller (CNC) and the safety programmable logic controller (PLC).

Middleware

This logical block operates through two basic modules. The first is the MANTIS Embedded PC, which is basically an application that can run on a low cost computer (such as a Raspberry Pi) or directly on the CNC (if powerful enough). This module is responsible for collecting data from the CNC I/O and transmitting it to the data analysis engine for processing, and is implemented as a communication API. When running on an external computer, this module also connects to the new wireless MANTIS sensors placed on the machine using the Bluetooth Low Energy (BLE) protocol. Communications are then supported by the RabbitMQ message oriented middleware, which takes care of proper routing of messages between peers. This middleware handles both the AMQP and MQTT protocols for communication between nodes.
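As a rough sketch of how a collector might hand a reading to this middleware, the snippet below publishes one sensor value to a RabbitMQ broker over AMQP with the pika client library. The broker address, credentials, queue name and payload fields are placeholders, not the pilot’s actual configuration.

    # Rough sketch of an embedded collector publishing one sensor reading to a
    # RabbitMQ broker over AMQP using the pika client library. Broker address,
    # credentials, queue name and payload fields are placeholders.
    import json
    import time
    import pika

    BROKER_HOST = "rabbitmq.example.org"        # placeholder
    QUEUE = "mantis.adira.gb22040.sensors"      # placeholder queue / routing key

    def publish_reading(sensor_id, value):
        credentials = pika.PlainCredentials("collector", "secret")   # placeholder
        params = pika.ConnectionParameters(host=BROKER_HOST, credentials=credentials)
        connection = pika.BlockingConnection(params)
        channel = connection.channel()
        channel.queue_declare(queue=QUEUE, durable=True)
        body = json.dumps({"sensor": sensor_id,
                           "value": value,
                           "timestamp": time.time()})
        channel.basic_publish(exchange="", routing_key=QUEUE, body=body)
        connection.close()

    # Example: publish_reading("hydraulic_pressure_bar", 182.5)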

The I/O module is used to extract raw information from the machine sensors; this information is collected by the existing PLC, made available on the Windows-based numerical controller through shared memory and then written to files. Our software collects sensor data from these files, thus completely isolating the MANTIS applications from the numerical controller’s application and from the PLC.

Data Analysis

This logical block takes care of data analysis and prediction, and it comprises three main modules. The first is a set of prediction models used for the detection, prognosis and diagnosis of machine failures. The second is an API that allows clients to request predictions from the models, and that can respond to different paradigms such as REST or message-queue based interaction. Finally, the third module is a basic ETL (Extraction, Transformation and Loading) subsystem that is responsible for acquiring, preparing and recording the data that will be used for model generation, selection and testing. This last module is also used to process the data of incoming prediction requests, as the same transformations used for model generation are also required for prediction.
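The point that the same transformations must serve both model generation and prediction can be illustrated with a scikit-learn pipeline, which bundles the preparation steps with the model so that training data and incoming prediction requests are processed identically. The preparation step and the classifier below are illustrative assumptions, not the pilot’s actual models.

    # Illustrative sketch of sharing the ETL transformations between model
    # generation and prediction: wrapping them in a single pipeline guarantees
    # that the same scaling is applied to training data and to new requests.
    # Feature preparation and classifier choice are assumptions for this example.
    from sklearn.pipeline import Pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.ensemble import RandomForestClassifier

    def build_failure_model():
        return Pipeline([
            ("scale", StandardScaler()),                 # shared transformation step
            ("clf", RandomForestClassifier(n_estimators=100, random_state=0)),
        ])

    # Training (offline, from the ETL-prepared history):
    #   model = build_failure_model()
    #   model.fit(X_train, y_train)
    #
    # Serving a prediction request (online): the same pipeline object applies
    # the identical transformation before classifying the new sample:
    #   model.predict_proba(X_new)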

Visualization

This logical block consists of two modules, the human machine interface (HMI) and the Intelligent Maintenance DSS. The HMI is designed as a web-based mobile application, accessible over the network from any computer or tablet. It works in two different modes, depending on which kind of user is accessing it. In fact, the HMI supports two user types, the data analyst and the maintenance manager, allowing both of them to analyze the machine’s status and to record failure and diagnostics related data. The data analysis HMI provides an interface for the data analyst, allowing the consultation and analysis of data and results. The maintenance management HMI, on the other hand, allows for consulting predicted events and suggested maintenance actions.

The second module is the Intelligent Maintenance DSS, which uses a Knowledge Base built from diagnosis information, prediction models and the data sent by the sensors. On top of this Knowledge Base there is a rule-based reasoning engine that includes all the rules necessary to deduce new knowledge that helps the maintenance crew diagnose failures.

Ongoing work

The work performed so far is well advanced, and an integration event will take place in the near future, where the interconnection between all systems will be tested and validated.

The demonstrator being built will be evaluated according to the following criteria: prediction model performance (live data sets will be compared to the model generation test sets) and application usability (the user should access the required information easily, in order to facilitate failure detection and diagnosis).

Fast prototyping of service robot behavior for a cleaning and tidying task in maintenance

The MANTIS project is concerned with predictive maintenance on the basis of big data streams from large (industrial) operations. At the end of the processing pipeline, the result will be planning suggestions for maintenance actions. Usually, maintenance is performed by human operators.

However, with current developments in machine learning, AI and robotics, it becomes interesting to see what type of ‘corrective actions’ in maintenance could be performed by industrial service robots.

In industrial production lines it is common to observe fairly short times between failures, especially in long chains. Whereas individual components are often designed to function extremely well, for instance under a regime of ‘zero-defect manufacturing’, the performance of the line as a whole may be disappointing. What is more, the actions performed by human operators to solve the problems may be very mundane and simple, such as removing dirt due to fouling or lubricating critical components. With the current advances in robot hardware and software technology, it becomes increasingly attractive to automate such maintenance actions. Whereas maintenance in the form of module or part replacement is too difficult for current state-of-the-art robotics, cleaning and tidying is definitely possible.

With this application domain in mind, a laboratory setup was designed for quickly developing a robotic maintenance task for demonstration purposes by a master student team (Francesco Bidoia, Rik Timmers, Marc Groefsema) under the guidance of a PhD student (Amir Shantia). We were able to rapidly configure our existing mobile robot platform to realize simple cleaning and tidying actions, similar to what is needed in basic industrial maintenance tasks. The demonstration involves speech control, navigational autonomy, work piece approach and dynamic reactivity to three object types, using tool switching. Objects are considered to be either a) untouchable, b) removable by hand, or c) consisting of small fragments (cf. ‘dirt’) that need to be brushed away. In three weeks, a full demonstration could be developed by the student team, using a mobile robot with a single arm that had been designed earlier for RoboCup@Home tasks:

The robot in our demonstration uses the light-weight carbon-fiber arm by Kinova (http://www.kinovarobotics.com/), a self-made transport base, standard Kinect sensors (for generating 3D point clouds) and digital cameras for vision. Programming was done using the ROS environment, with a pre-existing code base in C++ and Python. It is evident that by using currently available commercial mobile platforms such as KUKA (http://www.kukarobotics.com/en/products/mobility/KMR_iiwa/), MIR (http://mobile-industrial-robots.com/en/multimedia-2/videos/) and Universal Robots (https://www.universal-robots.com/), a similar, more sturdy industry-level system can be constructed.

Watch the whole demonstration here:

 

1st CREMA/C2NET Industrial Workshop

The 1st CREMA/C2NET Industrial Workshop will take place on 24th November at the Orona Fundazioa facilities located in Hernani (Basque Country, Spain). The event, organised by the CREMA and C2NET H2020 EU projects, is intended to present future trends of European industry, especially those related to digitalization technologies applied to manufacturing. High level speakers from the Basque Government, the European Commission and the industry sector will give their expert vision.

Moreover, CREMA and C2NET will present findings generated in both projects, highlighting their approaches to meeting the above challenges. Presentations and practical demonstrations will be made by partners of both projects to present innovative solutions based on digital platforms in the cloud to boost collaboration among manufacturing companies. Advanced cloud technologies and applications will be shown that allow manufacturing companies faster and more efficient decision making for a better use of their manufacturing assets. The different business models and exploitation strategies followed by both projects to bring their outcomes to the market will also be presented.

Some MANTIS partners, such as MGEP, IKERLAN, TEKNIKER, MCC, FAGOR ARRASATE and GOIZPER, will attend this event to learn about other EU projects’ approaches to common research areas and to make new contacts for potential future collaborations.

There is still time to register via the website of the event: http://www.crema-c2networkshop.com/. We encourage you to do so.

Agenda:

09:00 – 09:30

Registration

09:30 – 09:40

Opening session

Welcome and event presentation

·       Eduardo Saiz, IK4-IKERLAN –  CREMA Project Researcher & C2NET Project Manager

Basque Government short talk

·       Alexander Arriola, General Director of SPRI/Basque Government

9:40 – 10:00

First Keynote: Industry 4.0 Implementation Strategy

·       Eduardo Beltrán, Innovation & Technology Director of MONDRAGON Corporation

10:00 – 10:15

Digitising European Industry

·       Max Lemke, Head of Unit Components and Systems, European Commission, DG CONNECT

10:15 – 11:00

The CREMA / C2NET viewpoint on future Industrial trends and a taste of the services that can be deployed in the Industrial Arena

·       Tim Dellas, ASCORA – CREMA Project Coordinator

·       Jorge Rodriguez, ATOS – C2NET Project Coordinator

11:00 – 11:30

Coffee Break

11:30 – 12:15

CREMA – Cloud Services for the Manufacturing Sector

·       Jon Rodriguez, FAGOR ARRASATE – CREMA Use Case I: Machinery Maintenance WP Leader

·       Mikel Anasagasti, GOIZPER – CREMA Use Case I: Machinery Maintenance Partner

·       Aizea Lojo, IK4-IKERLAN – CREMA Project Manager

·       Jessica Gil, TENNECO – CREMA Use Case II: Automotive WP Leader

12:15 – 13:00

C2NET – The complete Networked solution for Industry

·       Raúl Poler, UPV – C2NET Processes Optimization of Manufacturing Assets WP Leader

·       Carlos Agostinho, UNINOVA – C2NET Continuous Data Collection Framework WP Leader

·       Jacques Lamothe, ARMINES – C2NET Tools for Agile Collaboration WP Leader

13:00 – 14:00

Lunch break

14:00 – 14:45

Second Keynote: Industry 4.0 – How to master challenges to exploit new business opportunities

·       Stefan Zimmermann, Head of Global Vertical Manufacturing, Retail and Transportation market at ATOS

14:45 – 15:15

Interactive Session: Feedback from the audience to capture the pros and cons of Industry 4.0. What hurdles need to be overcome from an industrial viewpoint

·       Moderator: Gash Bhullar, TANet – CREMA Impact WP Leader

15:15 – 15:45

CREMA / C2NET Response to the Interactive Session and potential solutions to the Industry 4.0 Implementation and Deployment Strategies

·       Moderator: Gash Bhullar, TANet – CREMA Impact WP Leader

15:45 – 16:00

Coffee Break

16:30 – 17:00

Panel discussion

·       Moderator: Gash Bhullar, TANet – CREMA Impact WP Leader

 

Closing remarks

·       Eduardo Saiz, IK4-IKERLAN –  CREMA Project Researcher & C2NET Project Manager

Presentation of the Mantis Project at Sirris’ seminar: fleet-based analytics for data-driven operation and maintenance optimization

On October 24th, Sirris organized an industrial seminar on the opportunities and challenges related to fleet-based data exploration. During this seminar, a general introduction to the MANTIS project was given first, followed by presentations from several partners within the MANTIS project, including Mondragon University (press machines), the Eindhoven University of Technology (shaver manufacturing), 3E (photovoltaic plants), Ilias Solutions (vehicles), Atlas Copco (compressors) and Sirris. The event was a real success, with around 45 participants, and offered them, via real-world use cases in the industrial domains mentioned above, the opportunity to see how data-driven analytics on a fleet of machines can optimize their operation and maintenance.

Tom Tourwe introduced the MANTIS project

 

Urko Zurutuza presented the Press Machine Maintenance Use-Case

If you would like further information on the outcomes of this seminar, please contact Caroline Mair (caroline.mair@sirris.be).

Deep learning for predictive maintenance

There are two extreme approaches to predicting failures for predictive maintenance. The white box approach relies on manually constructed physical and mechanical models for predicting failures. The black box approach, on the other hand, relies on failure prediction models constructed with statistical and machine learning methods based on data gathered from a running system. The figure below illustrates such data-driven failure prediction for a machine monitored by three sensors.

data driven failure prediction

Machine learning algorithms are used to identify failure patterns in the sensor data that precede a machine failure. When such patterns are observed in operation, an alarm can be triggered so that corrective action can be taken to prevent or mitigate the imminent failure. For example, failure predictions can be used to optimize maintenance actions, such as scheduling the service engineers or managing the spare parts inventory to reduce downtime costs.

Automatic feature extraction

An important part of modeling a failure predictor is selecting or constructing the right features, i.e. selecting existing features from the data set, or constructing derived features, that are most suitable for solving the learning task.

Traditionally, features are selected manually, relying on the experience of process engineers who understand the physical and mechanical processes in the analyzed system. Unfortunately, manual feature selection suffers from different kinds of bias and is very labor intensive. Moreover, the selected features are specific to a particular learning task and cannot easily be reused in a different task (e.g. the features which are effective for predicting failures in one production line will not necessarily be effective in a different line).

Deep learning techniques investigated in the MANTIS project offer an alternative to manual feature selection. Deep learning refers to a branch of machine learning based on algorithms which automatically extract from the raw data the abstract features that are most suitable for solving a particular learning task. Predictive maintenance can benefit from such automatic feature extraction to reduce the effort, cost and delay associated with extracting good features.
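As a minimal sketch of what automatic feature extraction can look like, the snippet below trains a small autoencoder on windows of raw sensor data; the encoder then maps each window to a compact feature vector that can feed a downstream failure predictor. The window length, layer sizes and training loop are illustrative assumptions, not the models investigated in MANTIS.

    # Minimal sketch of automatic feature extraction with an autoencoder: the
    # encoder compresses a window of raw sensor samples into a small feature
    # vector for a downstream failure predictor. Window length, layer sizes
    # and training details are illustrative assumptions.
    import torch
    import torch.nn as nn

    WINDOW = 256       # raw samples per sensor window (assumed)
    N_FEATURES = 8     # size of the learned feature vector (assumed)

    class WindowAutoencoder(nn.Module):
        def __init__(self):
            super().__init__()
            self.encoder = nn.Sequential(
                nn.Linear(WINDOW, 64), nn.ReLU(),
                nn.Linear(64, N_FEATURES),
            )
            self.decoder = nn.Sequential(
                nn.Linear(N_FEATURES, 64), nn.ReLU(),
                nn.Linear(64, WINDOW),
            )

        def forward(self, x):
            return self.decoder(self.encoder(x))

    def train(model, windows, epochs=20, lr=1e-3):
        """windows: tensor of shape (n_windows, WINDOW) with raw sensor data."""
        optimizer = torch.optim.Adam(model.parameters(), lr=lr)
        loss_fn = nn.MSELoss()
        for _ in range(epochs):
            optimizer.zero_grad()
            loss = loss_fn(model(windows), windows)   # reconstruction error
            loss.backward()
            optimizer.step()
        return model

    # After training, model.encoder(windows) yields compact features for the
    # downstream failure-prediction model.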

Sirris seminar on fleet-based analytics for data-driven operation and maintenance optimization

On October 24th, Sirris is organizing an industrial seminar in Belgium on the opportunities and challenges related to fleet-based data exploration. During this event, Belgian as well as other European industrial partners from the MANTIS project will present their experience with fleet-based analytics (based on use cases from the MANTIS project).

Many companies operate a fleet of machines that have similar, almost identical behaviour in terms of internal operation, application and usage, such as windmills, compressors and professional vehicles. This set of almost identical machines is referred to as ‘a fleet’.

In addition, more and more of those machines are equipped with several (smart) sensors that can capture data on operational temperature, vibrations, pressure and many other quantities, depending on the machine. At the same time, communication and data storage technologies are becoming ubiquitous, making it possible to gather the data in a central platform and derive insights into normal and anomalous behaviour across the entire fleet of machines. By comparing, for example, the behaviour of a single machine to the rest of the fleet, one can identify whether a machine is underperforming due to misconfiguration or imminent failure. The analysis of this data can also help service and maintenance personnel achieve a more detailed and optimised maintenance planning, e.g. ensuring an optimal distribution of the entire fleet in terms of remaining useful life, in order to manage the workload of the service engineers. The exploitation of the data collected on a fleet of machines is therefore a real asset for maintenance and service personnel and, at a larger scale, for an entire company.
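A minimal sketch of this fleet comparison idea: compute how far each machine’s latest indicator value lies from the fleet average, and flag machines beyond a chosen number of standard deviations. The indicator, threshold and input format are illustrative assumptions, not a method prescribed by the MANTIS project.

    # Small sketch of fleet-level comparison: flag machines whose key indicator
    # deviates strongly from the rest of the fleet. Indicator, z-score threshold
    # and input format are illustrative assumptions.
    from statistics import mean, stdev

    def flag_outliers(fleet_readings, threshold=3.0):
        """fleet_readings: dict mapping machine id -> latest indicator value
        (e.g. average bearing temperature). Returns machines far from the fleet."""
        values = list(fleet_readings.values())
        mu, sigma = mean(values), stdev(values)
        if sigma == 0:
            return []
        return [machine for machine, value in fleet_readings.items()
                if abs(value - mu) / sigma > threshold]

    # Example (hypothetical readings): for a fleet of machines hovering around
    # a common operating value, flag_outliers(readings) returns the ones whose
    # indicator clearly stands apart from the rest.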

Are you interested in this event? Check out the event’s agenda and register here

Programme

13:00 – 13:15: Registration and coffee

13:15 – 13:30: Setting the scene (Sirris)

13:30 – 14:30: MANTIS project: Cyber Physical System based Proactive Collaborative Maintenance

    • Project goals and challenges by Sirris
    • Fagor use case (press machines) – title to be announced
    • Philips use case (shaver manufacturing) – title to be announced

14:30 – 15:30: Root cause analysis

    • Barco – Vitriol: let open source data science talk quality and business at Barco Projection
    • 3E – Data-driven Fault Detection for Photovoltaic Plants: Data Quality, Common Faults and Data Annotation
    • Atlas Copco – SMARTLINK & root-cause analysis on compressors worldwide to improve on operational efficiency

15:30 – 15:50: Coffee break

15:50 – 17:10: Failure prediction & Operational optimisation

    • Barco – LightLease Predicting Lamp Behaviour in Digital Cinema
    • Ilias – Towards Predictive Vehicle Fleet Management
    • Maintenance Partners – Performance optimisation and failure prediction of wind turbines
    • Pepite – Analytics for operational optimisation

17:10 – 17:30: Closing remarks (Sirris)

17:30 – Networking reception