Embracing AI to elevate equipment effectiveness
Almost all electronic devices today are equipped with one or more sensors. They collect a wealth of data – about how the systems are doing, what they’re doing and in which conditions they’re doing that. All this data can potentially be of great value to your operation. Most of this potential currently goes unused, though. That’s a shame, because it presents you with a unique opportunity to improve your so-called overall equipment effectiveness (OEE).
OEE is a measure of how well your manufacturing operation is utilized in terms of facilities, time and material, compared to its full potential, during the periods when it’s scheduled to run. It identifies the percentage of manufacturing time that’s truly productive. An OEE of 100 percent means that you make flawless products (100 percent quality), at maximum speed (100 percent performance), without interruption (100 percent availability). By measuring OEE and the underlying losses, you’ll gain important insights into how to systematically improve your manufacturing process.
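The three factors multiply into a single score. A minimal sketch of the calculation (all figures are hypothetical, for illustration only):

```python
def oee(availability: float, performance: float, quality: float) -> float:
    """Overall equipment effectiveness: the product of its three factors (each 0-1)."""
    return availability * performance * quality

# Hypothetical shift: 90% uptime, 95% of ideal speed, 99% good parts
score = oee(availability=0.90, performance=0.95, quality=0.99)
print(f"OEE: {score:.1%}")  # prints "OEE: 84.6%"
```

Note how quickly the overall score drops even when each individual factor looks respectable – which is exactly why measuring the underlying losses matters.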
Setting up machine monitoring
Measuring OEE means retrieving data from your systems and sending them to the cloud for analysis. This poses a number of challenges. An important obstacle is sentiment: you’d rather not share your data with the outside world, especially data that may reveal company-sensitive information. With appropriate security measures, however, the risk of your data falling into the wrong hands can be minimized, while anonymization can adequately address any privacy issues. And you can always decide not to release the data that really give you a competitive edge.
Once that sentiment has been turned around, there’s the challenge of setting up the data processing pipeline. Your machine pool most likely includes systems of different makes, each with their own control and communication technology, and of different ages, from new with modern capabilities to decades-old with limited controls and connectivity. It’ll take some effort to align your systems and their data streams and bring them together in a single environment where you can assess your OEE.
When you have the infrastructure in place and your pipeline operational, there’s the challenge of making sense of all the data that come pouring in. This may look like a mountain of work, requiring a lot of time and effort to climb. But with specialist help and intelligent use of intelligent tooling, you can actually start small and clear-cut.
Connecting to the network
Creating a data processing pipeline starts with hooking up your diverse machine pool to your company network. The data are then pulled from the machine controls and sent to the cloud. The cloud platform, provided by companies such as Amazon (AWS), Google (Cloud), IBM (Cloud) or Microsoft (Azure), stores the data in a so-called data lake. It also offers the possibility to automatically analyze them. These analytics transform the data into actionable information, which is sent back to a dashboard on your personal computer that presents it in an insightful way.
The newer systems in your machine pool are equipped with sensors and wireless connectivity that provides remote access to the machine controls and sensor data. Older machines may need to be retrofitted with such connectivity. You can do this by hooking up a wireless gateway to the control’s serial or Ethernet port or – when such a port is unavailable – by using a special device that takes the control’s analog or digital I/O as input and has wireless connectivity of its own to communicate with your company network.
Your systems don’t always immediately provide all the data you need to adequately assess your OEE. Maybe they fail to measure a key characteristic or maybe the sampling frequency is too low. To broaden the scope, you can choose to install additional sensors, either off-the-shelf or custom-made.
Preprocessing the data streams
Different system makes may use different communication protocols, such as Modbus, MTConnect or OPC UA. To be able to correlate the data from different sources, the different data streams need to be structured in a uniform way. This is where artificial intelligence (AI) comes in. The cloud platform offers machine learning (ML) algorithms that take the raw data, analyze them for (recurring) patterns and structure them accordingly. Unsupervised learning, for instance, infers the organization of the data and constructs a model without pre-existing labeling and without human supervision. Usually, however, it’s more efficient to have a service engineer assist in interpreting the data.
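Before any learning can happen, readings arriving over different protocols have to land in one uniform schema. A minimal sketch of such normalization – the field names and payload shapes below are invented for illustration, not taken from the actual protocol specifications:

```python
from datetime import datetime, timezone

def normalize(source: str, payload: dict) -> dict:
    """Map protocol-specific payloads onto one uniform record (illustrative fields)."""
    if source == "modbus":
        # Hypothetical Modbus register read
        return {"machine": payload["machine"], "metric": f"reg_{payload['reg']}",
                "value": payload["value"], "ts": payload["ts"]}
    if source == "opcua":
        # Hypothetical OPC UA node sample; keep the trailing node name as the metric
        return {"machine": payload["machine"], "metric": payload["node"].split("s=")[-1],
                "value": payload["val"], "ts": payload["ts"]}
    raise ValueError(f"unknown source: {source}")

ts = datetime(2024, 1, 1, tzinfo=timezone.utc).isoformat()
records = [
    normalize("modbus", {"machine": "press-1", "reg": 40001, "value": 1450, "ts": ts}),
    normalize("opcua", {"machine": "mill-2", "node": "ns=2;s=SpindleSpeed",
                        "val": 1450.0, "ts": ts}),
]
```

With every reading in the same shape, data from a decades-old press and a brand-new mill can finally be correlated side by side.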
Once structured, the data may require cleaning up. There can be some noise mixed in, eg from sensor hiccups and other one-off events that are irrelevant for your OEE assessment. ML techniques can identify and filter out these meaningless quirks, readying your data for the heavy lifting.
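Even a simple classical filter already illustrates the idea – shown here in place of a full ML pipeline. A rolling median damps one-off spikes while leaving the genuine signal alone (the readings below are made up):

```python
from statistics import median

def despike(values, window=3):
    """Replace each reading by the median of its neighborhood, damping one-off spikes."""
    half = window // 2
    out = []
    for i in range(len(values)):
        lo, hi = max(0, i - half), min(len(values), i + half + 1)
        out.append(median(values[lo:hi]))
    return out

readings = [20.1, 20.2, 87.0, 20.3, 20.2]  # 87.0 is a hypothetical sensor hiccup
cleaned = despike(readings)
print(cleaned)  # the spike is gone; all values stay near 20
```

ML-based cleaning goes further – it can learn which deviations are hiccups and which are real – but the principle is the same: remove the quirks before they skew the analysis.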
You can have the raw data (semi)automatically processed on your premises – at the so-called edge. To do so, you can put a special edge device between the machine control and the company network. Connected by wire or wirelessly to the control, the device collects the data from the system and already does some work on them locally. This gives you some speed gains, your data stay on your premises and you still have some analytics available even when your Internet connection is down.
Distilling actionable information
For more powerful analytics, ML and other AI algorithms can be unleashed on the data lake in the cloud. They distill actionable information from the stored structured and cleaned data. For instance, they can look for what’s known as the “six big losses” to OEE: production rejects and rejects on start-up (quality), minor stops and speed loss (performance) and planned downtime and breakdowns (availability). With the results, presented in the dashboard, you can decide on the right course of action.
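The relation between the six big losses and the three OEE factors can be made concrete. A sketch for a single scheduled shift – all minutes and part counts below are hypothetical:

```python
# Hypothetical figures for one 480-minute scheduled shift
scheduled = 480
planned_downtime = 30    # availability loss: changeovers, maintenance
breakdowns = 20          # availability loss: unplanned stops
minor_stops = 15         # performance loss, in minutes of ideal time lost
speed_loss = 25          # performance loss
good_parts, total_parts = 480, 500  # quality loss: 20 rejects (start-up + production)

run_time = scheduled - planned_downtime - breakdowns
availability = run_time / scheduled
performance = (run_time - minor_stops - speed_loss) / run_time
quality = good_parts / total_parts

oee = availability * performance * quality
print(f"{availability:.1%} x {performance:.1%} x {quality:.1%} = {oee:.1%}")
```

A breakdown like this tells you where to act first: in this invented example, each factor loses roughly the same handful of percentage points, yet together they pull the overall score down to 78 percent.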
Anomaly detection, for example, can identify rare events that raise suspicions by differing significantly from the majority of the data. Typically, the anomalous items will translate to some kind of problem, such as a machine being in an inherently different state from a pre-established baseline. By using only measures that are imperative to the system’s operation, the technique saves cost, as no additional sensors need to be installed and tuned.
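A minimal illustration of the principle, using a simple z-score against a baseline rather than a full ML model – all readings are made up:

```python
from statistics import mean, stdev

def anomalies(baseline, readings, threshold=3.0):
    """Flag readings more than `threshold` standard deviations from the baseline mean."""
    mu, sigma = mean(baseline), stdev(baseline)
    return [x for x in readings if abs(x - mu) / sigma > threshold]

# Hypothetical vibration levels: normal operation vs a new batch of readings
baseline = [0.50, 0.52, 0.48, 0.51, 0.49, 0.50, 0.53, 0.47]
new_batch = [0.50, 0.51, 0.92, 0.49]
print(anomalies(baseline, new_batch))  # prints "[0.92]"
```

Production-grade anomaly detection uses richer models that handle multiple measures and drifting baselines, but the core idea is the same: establish what normal looks like, then flag what doesn’t fit.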
The cloud providers offer all kinds of automated models for building ML applications. These out-of-the-box models are easy to use yet very generic. If you want to get the most out of your application, you’re better off developing your own models. You can get extensive support for this from the cloud platforms but as it requires in-depth knowledge about what makes a good data set, specialist data science expertise will again prove to be of invaluable assistance.
Deploying data scientists
In these AI-supported transformation and decision processes, data scientists thus play a key role. They unify statistics, data analysis, machine learning and their related methods in order to understand and analyze actual phenomena with data. Rather than just seeing numbers, a data scientist understands what they mean and how to use the AI toolbox to get the desired information.
You can hire a data scientist or you can buy the service from a consultancy firm. The most important reason to get one of your own is that an internal specialist can quickly master your domain. Furthermore, hiring one also ensures continuity and prevents the process from going wrong once a project has been completed. It’s also much easier to use data internally than to share them with an external party where trust isn’t immediately ingrained.
Getting things going
With specialist expertise and adequate tooling, it’s fairly straightforward to turn your machine pool into a data factory. By getting your production systems connected and having their sensor readings automatically processed and presented, you can obtain valuable insights into the quality, performance and availability of your manufacturing operation. You’re provided with actionable information enabling you to tune these parameters and elevate your OEE.
All the required expertise and tooling is readily available. You can start small and keep it that way, by calling in outside help to get things going and have them do a periodical checkup. Or you can grow it as big as you like, by setting up your own data science machinery. Either way, you can benefit greatly by tapping into your data well and using its resources to grease your manufacturing operation.
The members of the High Tech Software Cluster can help you with specialist expertise and adequate tooling. Contact us for more information.