Sensor AI has a role not only in industry but also in sectors such as medicine and consumer electronics.

Corporate artificial intelligence and robotics are no longer just futuristic concepts but are becoming an integral part of everyday business operations. They enable companies to improve efficiency, productivity, and responsiveness to change, and they support product innovation. One of the key aspects of this integration is the use of sensor AI, which allows for data collection and analysis using a variety of sensors and devices.

Let’s take a look at some examples of how sensor AI can transform various industries and innovate businesses. The following examples illustrate practices already in use today:

Industrial automation: In industrial automation, sensor AI is used to monitor and control manufacturing processes. Sensors can monitor essential parameters such as temperature, pressure, or humidity, but also detect microscopic changes in the environment that could signal potential issues. For example, sensors detecting changes in air pressure can warn of impending equipment failure, allowing maintenance to be carried out before the problem becomes serious.
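The predictive-maintenance idea above can be sketched in a few lines: flag any sensor reading that deviates sharply from a rolling baseline. This is a minimal illustration only; the window size, threshold, and sample values below are assumptions, not taken from any real deployment.

```python
from statistics import mean, stdev

def detect_anomalies(readings, window=10, threshold=3.0):
    """Flag readings that deviate sharply from the recent rolling baseline.

    A reading is anomalous when it lies more than `threshold` standard
    deviations from the mean of the preceding `window` readings.
    """
    anomalies = []
    for i in range(window, len(readings)):
        baseline = readings[i - window:i]
        mu, sigma = mean(baseline), stdev(baseline)
        if sigma > 0 and abs(readings[i] - mu) / sigma > threshold:
            anomalies.append(i)  # index of the suspicious sample
    return anomalies

# Simulated air-pressure samples (kPa): stable operation, then a sudden drop
pressure = [101.3, 101.4, 101.2, 101.3, 101.5, 101.4, 101.3, 101.2,
            101.4, 101.3, 101.4, 101.2, 95.0]  # the last sample is the fault
print(detect_anomalies(pressure))  # → [12]
```

In practice a maintenance system would raise a work order when such an index is flagged, rather than waiting for the equipment to fail.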

Medicine: Sensors enable the monitoring of heart rate, blood pressure, or glucose levels. This data can be analysed by artificial intelligence, for example to diagnose and monitor health conditions or to predict the future course of a disease or a patient’s response to treatment. Sensor AI can detect patterns of changes in blood pressure that may predict when the next hypertensive crisis will occur and warn the doctor or patient well in advance.

Autonomous vehicles: Sensors are crucial for collecting data about the surrounding environment. In addition to traditional sensors such as lidar, radar and cameras, modern vehicles often use other advanced sensors such as ultrasonic sensors to detect obstacles in the vicinity of the vehicle or sensors to measure road quality. This information is essential for the proper functioning of autonomous systems, which must be able to quickly and accurately respond to various situations on the road to ensure safe driving.

Smart cities: Sensor AI in smart cities is used to monitor traffic, air quality, noise levels, and other factors affecting the environment. Modern sensors measure essential parameters and identify specific pollution or problems in public infrastructure. For example, a sensor network in a city can detect gas leaks in the distribution network and automatically alert the relevant authorities, enabling rapid action.

Wearables: Sensors in electronics such as smart watches or fitness bracelets collect data on movement, heart rate and other physiological parameters. This information is not only used for personal monitoring and improving the health of users but can also be shared in the form of anonymised data with research institutions or public health organisations to analyse and predict epidemics or to track population health trends.

Why don’t Czech companies use AI?

Despite all these potential benefits, many Czech companies are still hesitant to implement AI into their processes. There are several reasons for that.

Firstly, there is a shortage of qualified experts in the Czech Republic who are able to design and implement AI systems within corporate infrastructure.

According to RSM, a local IT consulting firm, 48% of companies have the technical conditions for rapid implementation of AI, but development is hindered by both managers and legislation. The analysis points to specific obstacles: managers’ low willingness to bear the risks associated with the pioneering phases of AI implementation, including legislative and security aspects such as personal data protection. It may also be difficult to agree across the company on how corporate AI should work. Moreover, significant revisions of existing legislation and updates to the national AI strategy are needed, a process that is still in its early stages.

Some companies don’t have a clear idea on how to use AI to improve their processes or innovate products and services. This lack of awareness may lead to a lack of motivation for investment in AI technologies.

However, organisations should not resist this trend. In countries such as Japan and the US, AI is already widely used, including in autonomous taxis. Once Czech companies overcome their concerns and embrace AI as an essential part of their operations, they can enjoy higher efficiency, innovation, and a competitive advantage. There is hardly any company that cannot benefit from what AI has to offer, from data processing and analysis, through process automation and automated vehicle control, to fully autonomous factory or shop-floor operation.

The age of semantic automation

The combination of Robotic Process Automation (RPA) and Artificial Intelligence (AI) creates a powerful symbiosis that can elevate business productivity and efficiency to a new level. Semantic automation, based on generative artificial intelligence, is a driving technology with the potential to fundamentally change the way companies operate.

In a time when digital transformation is a necessity rather than just a trend, RPA is becoming a major player. With its ability to automate mundane, repetitive tasks, RPA significantly enhances employee productivity. Together with AI, they form a synergistic duo, combining automation with the creativity of the human mind.

According to IDC, automation in companies reduces operating costs by 13.5% and saves an average of 1.9 hours of work per employee per week (source: IDC, Worldwide Automation Spending Guide 2022). These figures highlight the transformational potential of RPA and AI in increasing productivity and reducing costs.

RPA and generative AI – the combo for perfect automation

Generative artificial intelligence, as a subset of AI, focuses on the creation of content or data rather than just processing it. It uses machine learning techniques such as neural networks and deep learning to create new content in various forms. “Generative AI models learn from existing data and use this knowledge to produce original, creative and contextually relevant output,” explains Viktória Lukáčová Bracjunová, Head of Robotics and Automation at Soitron.

RPA excels in repetitive tasks, follows rules and procedures, doesn’t make mistakes and doesn’t need breaks. It dominates in structured processes, minimizing deviations. It is a key technology component for companies to reduce costs, reduce errors and speed up routine tasks.

Using semantic automation in a dynamic environment

In a dynamic automation environment, the combination of RPA and generative AI creates a powerful synergy that goes beyond the capabilities of either technology alone. “RPA successfully handles routine tasks, while generative AI is strong at processing complex, unstructured data and solving creative challenges. RPA ensures process consistency and minimises errors, while generative AI analyses data and provides deeper insights, improving the quality of strategic decisions,” says Viktória Lukáčová Bracjunová.

In the field of AI and natural language processing, semantics plays a crucial role. It provides the foundation for creating advanced generative artificial intelligence systems that are better able to understand and interact with human language, which is essential for the success of many AI applications.

Implementable in any company

RPA’s integration with existing systems and applications makes this technology an ideal choice for automating tasks within existing workflows. Next-generation automation can work with a variety of data types and formats, ensuring compatibility with a wide range of processes. Generative AI integration enhances the customer experience through personalized interactions, natural language understanding, and the empathetic handling of complex queries.

Where semantic automation can help

  • Advanced Natural Language Processing (NLP) capabilities – it can understand and respond to the customer’s natural language, which is key to automating customer care, order processing and maintaining customer relationships.
  • Machine Learning (ML) – allows robots to learn from data and improve their performance over time, which is crucial for tasks requiring adaptivity or decision making.
  • Optical Character Recognition (OCR) – enables the reading of information from unstructured digitized documents such as PDFs and images.
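The division of labour described above can be sketched in a few lines: deterministic RPA-style rules handle structured records, while unstructured text is routed to a semantic layer. Here the “classifier” is a naive keyword matcher standing in for a real NLP or generative model; all names, labels, and thresholds are illustrative assumptions.

```python
def rule_based_step(order):
    """RPA-style step: deterministic rules over structured data."""
    return "approve" if order["amount"] <= 1000 else "escalate"

def classify_free_text(message):
    """Stand-in for a generative/NLP model: naive keyword matching.

    In a real deployment this call would go to an NLP or generative AI
    service; the keyword rules below are purely illustrative.
    """
    text = message.lower()
    if "refund" in text or "money back" in text:
        return "refund_request"
    if "invoice" in text:
        return "billing_question"
    return "general_inquiry"

def handle(item):
    """Route structured items to rules, free text to the semantic layer."""
    if isinstance(item, dict):          # structured → RPA rules
        return rule_based_step(item)
    return classify_free_text(item)     # unstructured → semantic automation

print(handle({"amount": 250}))                  # → approve
print(handle("Hello, I would like a refund."))  # → refund_request
```

The point of the design is that each side does what it is good at: the rules never deviate, and the model absorbs the ambiguity of human language.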

Meeting the challenges of the modern market

In a rapidly evolving market environment, the combination of RPA and generative AI delivers precision automation, as well as creative innovation, providing a competitive advantage when deployed. Ignoring this technological symbiosis means missing an opportunity. “Now is the right time to let RPA and generative AI technologies collaborate and achieve improved results,” concludes Viktória Lukáčová Bracjunová.

Czech Companies Lose Hundreds of Thousands of Czech Crowns Annually Due to the Absence of Energy Management

Czech companies monitor their energy consumption, but not meticulously enough. This results in unnecessarily high operating costs month after month. Although half of the respondents in Soitron’s survey said that energy prices significantly affect their margins, with another 5% admitting severe financial struggles with energy costs, only a few companies diligently track their consumption. Yet simply monitoring consumption and breaking down costs by specific energy sources can eliminate inefficiencies and save up to a tenth of their operating costs.

High inflation and unclear future prospects have plagued Czech companies for over a year. One of the most effective solutions for saving costs is to monitor their energy consumption. “Just observing consumption alone motivates companies to think about how to avoid unnecessary losses of expensive energy,” says Martin Hummel, IoT specialist and product manager at Soitron, a leading IoT integrator operating in the Czech Republic and Slovakia.

It would suffice to switch off machines that are momentarily not in use and to factor energy efficiency into the choice of new equipment in the future, or to use automation to reduce energy consumption.

A Soitron survey with 86 participating companies reveals that almost every company (92%) tracks their energy expenditures. Those not monitoring consumption either claim they do not need this data or lack the necessary equipment to collect it.

As for companies monitoring their energy consumption, their approach varies. Some companies primarily look at data for the whole company (47%); more than a third (35%) monitor consumption through secondary metering at the building or individual hall level; and only 9% monitor through secondary metering at the level of individual lines and machines. The remaining respondents do not concern themselves with the detailed breakdown. Depending on the size and type of the company, they are unnecessarily losing up to hundreds of thousands of Czech crowns per year. These poorly spent funds could be better used for further development, bringing about both savings and the simplification of day-to-day work.

This problem can be solved by implementing an energy management system, commonly used in modern operations to manage and optimise energy consumption. “The first steps to monitoring and managing energy consumption are inexpensive. The metering system monitors the consumption of technologies, heating, cooling, or lighting in real time. It is thus able to quickly identify specific points of inefficiency, such as lights left on in an empty building. In several implementations, companies have reduced their consumption by 5 to 10% just by obtaining this data alone, so their investment was paid back in a matter of weeks,” explains Hummel.

Chart: energy monitoring

Without input data, optimisation is difficult

The key factor for most companies is to save on production costs. For 60% of respondents, total energy consumption is the most important figure, possibly in combination with the price of energy on the spot market and possible penalties for exceeding their 15-minute maxima. Companies follow their energy consumption and energy prices and optimise their production processes based on these factors. Production profitability, i.e. the ability to produce a product for less than what it sells for, is an important indicator for one-third of respondents (34%), which suggests that for them the fundamentals are working as they should.
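Once per-minute consumption data is available, the 15-minute maxima mentioned above are straightforward to compute. A minimal sketch follows; the demand profile and the contracted maximum are made-up values for illustration.

```python
def quarter_hour_maxima(minute_kw):
    """Average power per 15-minute block.

    Utilities typically bill penalties on the highest such block, so
    spotting which block exceeds the contracted maximum is the first
    step toward avoiding the charge.
    """
    blocks = [minute_kw[i:i + 15] for i in range(0, len(minute_kw), 15)]
    return [sum(b) / len(b) for b in blocks]

# One hour of simulated demand (kW), with a spike in the third quarter-hour
demand = [80.0] * 15 + [85.0] * 15 + [120.0] * 15 + [90.0] * 15

peaks = quarter_hour_maxima(demand)
contracted_max = 100.0  # illustrative contracted limit in kW
over = [i for i, p in enumerate(peaks) if p > contracted_max]
print(peaks)  # → [80.0, 85.0, 120.0, 90.0]
print(over)   # → [2]  (the block that would incur a penalty)
```

With this breakdown, a company can see not just that it exceeded its maximum, but exactly when, and can shift or stagger loads accordingly.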

However, the majority of companies (74%) do not use any specific system to collect and analyse energy consumption data. Those that do collect data claim that having it is useful. “Any software that enables accurate data analysis is worth it,” declares Hummel. Five percent of the respondents who said they could not manage without external help also admitted that they had no idea what percentage of a product’s total cost is the energy cost. Another five percent of companies were unable to assess the impact of high energy prices on the profitability of their production. For such companies, data collection would be the easiest first step.

Align your data centre with automation

We have been using automation for decades, and it has permeated virtually all areas of business. It’s not surprising that the operation of something as complex as a data centre is being automated as well. Even though this is the realm of ones and zeros, automation brings the same benefits as in industry: speeding up all operations, eliminating routine manual activities, increasing safety, and solving shortages of specialists.

Why introduce automation into a system where processes run seemingly without human intervention? A layman may think that as long as everything is running smoothly, data centres do not require manual interventions and the human factor usually comes into play only in the event of a crisis, incident, or failure.

However, the philosophy and architecture of today’s IT world have changed so much in recent years that it’s almost impossible to do without automation. This is due to the cloud, the use of SaaS services, and, above all, new approaches to the development and deployment of omnipresent applications. Any company that is serious about digitalization must implement automated processes, because it simply won’t succeed in business with the old infrastructure.

Data centres are automated by software that provides centralized access to the configuration of most resources. This lets these technologies and resources be controlled and managed easily, often without knowledge of all the technical details. Accessing the required services is much easier and faster: whereas handling a new application request used to take days to weeks, the same task now typically takes just a few minutes.

Modern applications need automation

Companies are most often “forced” to use automation tools when they transition to the cloud or hybrid environments, want to quickly develop and deploy applications, or need to speed up the implementation of new environments and reduce dependence on human resources.

The time when applications were developed and tested over a period of weeks, or even months, is irretrievably gone. Today’s applications are built from many smaller parts (microservices), each of which can be independently changed or upgraded. In addition, development, testing, and deployment require an automated infrastructure that allows for rapid changes and modifications.

Another common scenario in which automation is required is a company’s transition to the cloud or, even more often, to a hybrid environment combining the cloud and an on-premise infrastructure. While clouds are automation-ready, your own data centre environment first needs to be automated, and then the two environments need to be interconnected.

Finally, the deployment of automation is also motivated by the desire to accelerate the implementation of new environments and resolve human resource shortages. The shortage of top IT experts is a widespread phenomenon, so strategically it is useful for data centres to reduce their dependency on staff leaving the company or making mistakes.

Infrastructure as code

Soitron approaches automation by building automation platforms that allow automation procedures to be defined for any given data centre environment (cloud, hybrid, or on-premise). This follows the same principle as writing source code for applications: rewriting the entire infrastructure into code (Infrastructure as Code) brings many advantages over manually creating the environment.

When the infrastructure is defined by scripts, the same environment is recreated each time it is deployed. This approach is most beneficial when the customer wants to centrally manage their environment and dynamically change individual application environments. With scripts it is easy to make changes across the entire infrastructure. Let’s say a company determines that all its databases should be backed up ten times a day and that each database can be accessed by predefined administrators. Then they can set up a new branch and the conditions change. In this case, the company can simply use scripts to increase the number of daily backups or add more administrators as necessary.
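The backup-and-administrators example above can be reduced to the core Infrastructure-as-Code idea: the desired state lives in version-controlled data, and a plan step computes what must change. This is a toy sketch of the principle in plain Python, not the syntax of any particular tool; all resource names and values are invented.

```python
# Desired state declared as data — the "code" in Infrastructure as Code.
desired = {
    "db-main":   {"backups_per_day": 10, "admins": ["alice", "bob"]},
    "db-branch": {"backups_per_day": 10, "admins": ["alice", "bob", "carol"]},
}

# What is actually deployed right now.
current = {
    "db-main": {"backups_per_day": 4, "admins": ["alice"]},
}

def plan(desired, current):
    """Compute the changes needed to make `current` match `desired`."""
    changes = []
    for name, spec in desired.items():
        if name not in current:
            changes.append(("create", name, spec))
        elif current[name] != spec:
            changes.append(("update", name, spec))
    return changes

for action, name, spec in plan(desired, current):
    print(action, name, "->", spec["backups_per_day"], "backups/day")
```

Changing the backup policy or the administrator list for every database then means editing one declaration and re-running the plan, instead of reconfiguring each system by hand.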

Documentation serves as an infrastructure backup

Another advantage of automation is the documentation. The environment is defined by code, which makes it easy to see what is deployed and how everything is configured. The deployment can be replicated at any time (serving as an infrastructure backup) such as in the case of a failure or when you need to create a parallel testing environment.

“Documentation also solves the problem of staff substitutability or dependency, because the moment you have a code-defined infrastructure, a much wider range of people can work with it rather than a single IT engineer who happens to remember how the system was originally set up,” explains Zbyszek Lugsch, a business development director at Soitron.

An environment standardized by scripts allows other people to work with it and enter machine- or application-specific parameters. Scripting also allows the same security standards to be applied across the system, including when installing new servers.

Automation is a tailor-made project

To a large extent, data centre automation is always a unique project, as every company uses slightly different technologies and is at a different stage of automation deployment. The course of action is defined by the company’s intention, such as the need to create a suitable environment for developers or the desire to convert an on-premise centre to a hybrid architecture. One of the early steps is a proposal for replacement or extension of the data centre’s existing functions, components, or layers, followed by the implementation of the automation platform and the creation of scripts. The actual deployment takes place by gradually migrating existing systems to the new environment until the “old” environment can be shut down.

Starting to work in an environment that enables rapid application development and deployment will also reduce staffing requirements and offer superior security.

A #Cisco ExpertTip by Martin Divis, a systems engineer at Cisco: Deploy ICO

One of the most comprehensive automation tools is Cisco’s Intersight Cloud Orchestrator (ICO). ICO is a Software as a Service (SaaS) platform that makes it possible to manage a wide range of technologies, such as servers, network devices, and data storage, across your entire enterprise infrastructure.

The main advantage of ICO is its multi-domain and multi-vendor approach, which allows the tool to be used regardless of the current implementation. ICO includes an extensive library of predefined tasks that can be deployed for recurring tasks or processes in managed infrastructures. ICO works in a low code/no code design and allows drag-and-drop task setup and running. It is designed with a maximum emphasis on ease of use, making automation operations accessible to a wide range of IT team members.

PV management systems are becoming a ticking time bomb among publicly accessible online control systems

Local companies have started to upgrade the protection of their control systems. According to data from Soitron’s Void Security Operations Centre (SOC), the number of devices exposed and visible on the internet has dropped by 21% since the beginning of 2022; however, the current situation is still not desirable. In particular, the control systems of industrial and domestic photovoltaic (PV) power stations are becoming an alarming danger.

In a year-on-year comparison (01/2022 vs 01/2023), the total number of publicly available Industrial Control Systems (ICS) with at least one of the eight monitored protocols – such as Moxa, Modbus, and Tridium – has been reduced. This was the finding of Soitron’s team of Void SOC analysts. “This is a slight improvement, but in absolute figures it still means there are more than 1,500 vulnerable systems in various organizations, which is still a significant risk. And we’re only talking about the eight most commonly used protocol types. If we expand the set to a few dozen types of protocols, we can find more than 2,300 vulnerable systems,” says Martin Lohnert, the director of the Void SOC. He also adds that it would be welcome if the downward trend were due to the increasing security of these industrial systems, which would make them disappear from this report.

Often, however, the opposite is true. An ICS disappears from the internet only after it has been exploited by attackers and has stopped functioning. When bringing it back to life, operators are more careful not to repeat the original mistakes. Unfortunately, such measures are often just a response to damage already done.

The PV phenomenon has given rise to a new problem

Despite the overall reduction and a slight improvement in the situation, new vulnerable systems are still being added. “Last year these were mostly control systems for photovoltaic power stations, including both industrial stations with hundreds of installed solar panels and home installations,” says Lohnert. In his view, it should be in the interest of operators to make sure that their equipment is protected from internet security threats. Essential steps include changing default login credentials, restricting access, regularly updating firmware, and monitoring for misuse or login attempts.
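The essential hardening steps listed above can be expressed as a simple audit over device metadata. The field names, the credential list, and the one-year firmware threshold below are illustrative assumptions, not vendor-specific checks.

```python
# Commonly shipped factory credentials (illustrative sample)
DEFAULT_CREDS = {("admin", "admin"), ("admin", "1234"), ("root", "root")}

def audit_device(dev):
    """Return a list of findings for one internet-facing control system.

    The schema and thresholds are illustrative; a real audit would pull
    this information from the device or an asset inventory.
    """
    findings = []
    if (dev["username"], dev["password"]) in DEFAULT_CREDS:
        findings.append("default login credentials still in use")
    if not dev.get("access_restricted", False):
        findings.append("management interface reachable from the internet")
    if dev.get("firmware_age_days", 0) > 365:
        findings.append("firmware older than one year")
    return findings

# A hypothetical home PV inverter that fails all three checks
inverter = {"username": "admin", "password": "admin",
            "access_restricted": False, "firmware_age_days": 700}
for finding in audit_device(inverter):
    print("-", finding)
```

Even a checklist this basic would catch the most common weaknesses that make PV control systems show up in reports like the one above.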

The potential problem lies in the deactivation of such a system, the losses caused by disrupted solar power generation, and the cost of repair. At the same time, a successful penetration into a poorly secured industrial system can allow an attacker, often undetected, to attack more important systems essential for a company’s operation. This may cause the organization to stop functioning completely, with losses running into millions of Czech crowns.

The Czech Republic is lagging behind, but there is a solution

Although Soitron’s Void SOC has recently seen a positive trend, it is very likely that the number of potential risks will increase. In the Czech Republic and Slovakia, the major digitalization of industry is yet to take place. Both countries still lag far behind other EU countries in many aspects of digital transformation. Out of the twenty-seven EU countries, Slovakia is ranked 24th and the Czech Republic is ranked 20th in the Digital Economy and Society Index (DESI), which has been tracked by the European Commission since 2014. With the progress of digitalization, new technologies are gradually being introduced, such as production control systems, various sensors, programmable logic elements, and human-machine interfaces. If care is not taken to secure them, the figures in this survey will rise.

It is also clear that in order for digitalization to significantly shift the current state of cybersecurity, many industrial enterprises will need to invest in the tools, technology, and specialists to operate them; however, in most organizations (especially SMEs) this is not possible. “The most common reasons are insufficient funding and a general lack of qualified cybersecurity professionals. We therefore expect that the situation is likely to worsen before there is more awareness and a shift for the better. The safest solution is to use services of experts who will fully take care of the security of your business systems and infrastructure. Our monitoring centre can have everything under control 24 hours a day, 365 days a year,” adds Lohnert.

The NIS2 Directive can increase the security level of organizations in the Czech Republic

The European NIS2 (Network and Information Security Directive 2) can make things more difficult for Czech organizations, but it can also help them solve their cybersecurity problems. This is particularly the case for organizations that have not addressed this serious threat until now or could not justify the necessary budget for sufficiently qualified staff.

The NIS2 Directive aims to make the EU’s digital infrastructure more resilient to cyber attacks and improve coordination and incident response capabilities. “Many entities in the Czech Republic and elsewhere are not taking these matters seriously enough. This is due to the fact that there is a shortage of IT experts – let alone cybersecurity experts – on the market. Since the entities affected by the new directive will be obliged to ensure that their IT networks and information systems are sufficiently protected against cyber threats, this problem may become even worse,” says Petr Kocmich, Global Cyber Security Delivery Manager at Soitron.

What the directive changes

The institutions concerned must implement measures to prevent cyber attacks and threats, such as performing regular software updates, securing network devices, and providing protection against phishing attacks. In addition, they must prepare contingency plans for cyber incidents and establish mechanisms to deal with them quickly and effectively.

Major incidents must be reported within twenty-four hours of becoming aware of the incident and cooperation with national security authorities will be required. Any company that fails to comply with these requirements may be subject to fines and other sanctions.

Two birds with one stone

It would be great if the NIS2 Directive could help end the shortage of cybersecurity experts; however, this is unlikely, and at first glance it might even seem to exacerbate the problem. Having said that, the new regulation is an excellent opportunity to make organizations more secure. External cybersecurity service providers can help: they have the capabilities that organizations lack. “Specialized companies focus on providing these services and can help entities implement security measures and risk management as a complete package, i.e. a turnkey service or a solution delivery including support and compliance with the NIS2 Directive,” says Kocmich.


Specialized companies can help organizations solve both problems: ensuring compliance with the new requirements and improving the previously incomplete security of their IT infrastructure and information systems. However, even if organizations use the services of such providers, the responsibility remains their own.

They should choose their service provider cautiously and ensure that they are sufficiently qualified, experienced, and certified. It is also important to make sure the tasks are properly assigned and that the contractor’s performance is monitored. To ensure the efficiency and effectiveness of the model, the roles and responsibilities should be clearly defined in the contract between the organization and the cybersecurity service provider. It should be understood that the quality of the service delivered often reflects the quality and management capabilities of the provider.

Who the NIS2 Directive applies to and from when

The directive will mean greater obligations for companies in the Czech Republic in relation to network and information system cybersecurity and protection. However, it will also improve protection and resilience against cyber threats and cooperation between European countries. Last but not least, meeting the requirements of the NIS2 Directive can help organizations gain the trust of their customers and partners, who will be more satisfied with the protection of their data and information. Overall, the directive could help entities to improve their security practices and minimize risks.

The NIS2 Directive applies to electricity producers, healthcare providers, electronic communications services, and over sixty other services categorized into eighteen sectors. In the Czech Republic, the directive will start to apply on 16 October 2024 and will cover up to 15,000 entities – these are medium and large companies with over fifty employees and companies with an annual turnover of over CZK 250 million. Although the NIS2 Directive will only apply to organizations that meet the defined criteria, and others are not directly obliged to comply with the requirements, it is worth considering using it as a recommendation for improving general cybersecurity in other companies.

Opportunities for other entities

“It is estimated that up to 70% of domestic organizations have a problem with their cybersecurity. Smaller and medium-sized enterprises in particular do not have sufficiently secure IT systems and do not comply with basic security measures,” says Kocmich. Common problems include overly permissive user access rights, a lack of two- or multi-factor authentication combined with weak passwords (including those of administrators), poorly managed and decentralized user identities, outdated and unpatched hardware and software containing vulnerabilities, missing network segmentation, weak or missing email and internet access protection, inadequate perimeter protection, low visibility into network traffic, low or missing endpoint security, a lack of central log management, and inadequate employee training. “Cybersecurity is a big issue for many companies in the country. They can become easy targets for attackers. The NIS2 Directive should help raise awareness and protection against cyber threats,” adds Kocmich.

For more information on obligations under the NIS2 Directive, see the dedicated website (http://nis2.nukib.cz) of the National Cyber and Information Security Bureau (NCISB).

The effect of misconfigurations on business

International and Czech organizations continue to move their IT systems and data to the cloud. However, moving to the cloud is not just about migrating data; it also changes the way system administrators work, which brings new challenges and configuration procedures. During the migration process, it is easy for something not to be set up or configured in accordance with best practices. This leads to “misconfigurations”. As a result, companies are unnecessarily exposed to more attacks than before and cannot adequately defend themselves against them.

Both cloud and on-premise solutions offer clear benefits and address specific challenges and needs of organizations. However, taking an existing, fully local IT infrastructure and moving it to the cloud without making the necessary changes (the so-called “Lift & Shift” approach) is a common mistake. Both on-premise and cloud environments have their pros and cons, which is why customers often use hybrid environments. The reasons for this are usually legal requirements (due to data sensitivity and where the data may be physically stored), the architecture, and the complexity of legacy applications that can be made compatible with the cloud only with disproportionate investment and effort, or not at all.

Forcing it is not acceptable

Migrating to the cloud can help organizations reduce IT costs (if cloud resources are used appropriately) and have more computing power. More importantly, they can have more scalable performance available at any time, increase storage flexibility, and simplify and accelerate the deployment of systems and applications, while accessing data and systems from anywhere, anytime, 365 days a year.

However, as far as cyber security is concerned, deploying the cloud can increase the likelihood of an organization being attacked by malicious actors. If the “let’s go to the cloud” decision is made, it should be taken with due responsibility. First and foremost, it is important to understand that operating in the cloud is a shared responsibility between the cloud service provider and the customer, so the cloud is never a panacea. We can talk about choosing the right model (IaaS/PaaS/SaaS), but if we want to relieve the in-house IT/SEC team, PaaS and SaaS are the right choice, as most of the responsibility there falls on the cloud service provider. In addition, the move to the cloud should also be seen as an opportunity to adopt a modern and secure corporate infrastructure. At the same time, we must not forget to involve the security department, which should be a fundamental and integral part of any project of this kind.

Unfortunately, most cloud migrations amount to little more than forcing the existing system into the cloud in its current form. Instead, companies should ideally start utilizing native cloud resources, which often requires the replacement of existing monolithic applications. Otherwise, they gain nothing by simply moving their systems and data to the cloud, and it will most likely cost them more than their original on-premise solution.


Misconfigurations play the main part

Today’s on-premise solutions are relatively well-equipped with security monitoring and auditing tools in terms of established and proven standards, but this is not necessarily true after migration to the cloud. Cloud misconfigurations are vulnerabilities waiting to be noticed by attackers. They are gateways through which it is possible to infiltrate the cloud infrastructure and, thanks to the interconnection and hybrid mode, also laterally affect the existing on-premise infrastructure. This allows attackers to exfiltrate data, including access credentials, telemetry data of machines in the OT environment, health records, and personal data, and then deploy ransomware.

According to experts, an average enterprise has hundreds of misconfigurations every year, and its IT department is unaware of the vast majority of them. These misconfigurations are the result of human error and of missing cloud configuration health check tools (e.g. Cloud Security Posture Management, CSPM).

The impact of cloud misconfiguration on system security

When migrating systems, what often happens is that selected services that had only been available internally within the on-premise solution are exposed to the public online space after the migration without any filtering and blocking of external network traffic. Many companies suffer from this, including critical infrastructure operators. It may therefore happen that a console for controlling industrial control systems becomes publicly available online. We recently detected an ICS console of a production and assembly line control system available online without any authentication required. What we often see are services containing exploitable vulnerabilities without any additional security. The security may have been deployed in the on-premise solution but has not been implemented in the cloud (e.g. a missing Web Application Firewall). Quite common are services with default credentials and services used for the remote management of customers’ internal systems or even freely accessible sensitive data.

This is why there are dozens to hundreds of incidents per month, as seen in the statistics of our monitoring centre. Security misconfigurations become easy targets for attackers, who know that they are present in almost every enterprise. This neglect can have disastrous consequences. It helps attackers to reconnoitre and infiltrate customer environments, create persistent links for remote access, take control of systems, and exfiltrate data and login credentials, which are then disclosed or sold to be used for further attacks. Alternatively, it opens the door to lateral ransomware attacks or to cryptojacking, in which cloud computing resources are exploited to mine cryptocurrency.

Steps to minimize the risks of misconfigurations

Configuration management, and especially monitoring, requires a multifaceted approach.

Organizations should implement well-established security practices, such as regular Cloud Security Posture Management assessments, to help detect a range of security defects and misconfigurations. It is important to follow the Least-Privilege principle and to continuously monitor and audit cloud systems.
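To illustrate the idea, a CSPM-style assessment boils down to evaluating a set of rules against resource configurations. The sketch below is a minimal, hypothetical example; the resource fields and rule names are invented for illustration and do not correspond to any particular cloud provider’s API:

```python
# Minimal sketch of a CSPM-style rule check (all field names are hypothetical).
def audit_resources(resources):
    """Return (resource_id, finding) tuples for common misconfigurations."""
    findings = []
    for res in resources:
        if res.get("public_access"):
            findings.append((res["id"], "publicly accessible"))
        if not res.get("mfa_required", False):
            findings.append((res["id"], "MFA not enforced"))
        if res.get("credentials") == "default":
            findings.append((res["id"], "default credentials in use"))
    return findings

resources = [
    {"id": "storage-01", "public_access": True, "mfa_required": True},
    {"id": "vm-07", "public_access": False, "mfa_required": False,
     "credentials": "default"},
]

for rid, finding in audit_resources(resources):
    print(f"{rid}: {finding}")
```

Real CSPM tools apply hundreds of such rules continuously against the live cloud inventory, which is exactly why they catch the misconfigurations that periodic manual reviews miss.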

Maintaining sufficient visibility of cloud assets should be a priority, just as it is in on-premise solutions. Strong identity and access management helps scale permissions to ensure the right level of access to cloud services.

The identification and prevention of various misconfigurations during cloud migration help enterprises eliminate major security issues. Specialized companies can help by guiding the organization through the entire process and setting everything up correctly.

Preventing the OT Network from Becoming Vulnerable

The security of industrial networks is nowadays a key concern for companies and therefore they pay more attention to this issue. They are well aware that cyber attacks on operational technology (OT), sometimes also referred to as industrial control systems (ICS), are becoming more frequent and sophisticated. This increases not only the risk of attacks and data loss, but more importantly, production downtime, which can cost companies tens of millions of Czech crowns.

In the past, OT was primarily focused on controlling and automating industrial technologies and processes. The systems were designed to process large amounts of data in real time, with an emphasis on reliability and fault tolerance. Today, however, it is increasingly important to integrate OT with IT and to create a secure link between the two worlds. This enables industrial companies to better manage their processes and improve productivity.

Relevance of the Purdue model

Back in the 1990s, a model named the Purdue model (also known as PERA, the Purdue Enterprise Reference Architecture) was developed in the USA, and it is still considered one of the most widely used architectural models in operational technology.

The Purdue model provides a comprehensive framework for industrial process control and automation, allowing functions and responsibilities to be segmented into various levels of control and automation. Cyber threats are constantly evolving, and the interconnection of factory IT and OT systems plays into the hands of attackers. Many manufacturing and industrial companies have become targets of massive ransomware attacks precisely due to the lack of proper segmentation. That is why it is essential to segment industrial networks and constantly monitor them, as well as update security measures in response to the latest threats and trends in cyber security.

The most common threats in the industry include:

  • Compromising and the subsequent pivoting of an endpoint in an industrial network;
  • Fraudulent credentials and abuse of authorised remote access;
  • Attack on wireless connections;
  • Gaining physical access to the production network and equipment;
  • Installing a foreign physical component to obtain or modify transmitted data.

Five basic security principles and pillars

There are several basic security principles and pillars that are effective and crucial for ensuring cyber security in the industry. It is not surprising that, due to the fusion of the IT world with the OT world, these pillars are adopted from the IT world (supplemented with OT specifics, such as proprietary ICS protocols). The main principles and pillars include:

Visibility – you can’t protect what you can’t see. That is why it is advisable to keep an up-to-date list of all devices connected to the network and perform a behavioural analysis of their communications. Another prerequisite is the regular scanning of the status and versions (OT/IT) of devices. Any discovered vulnerabilities must be patched.

Segmentation – another pillar is the isolation, filtering, and inspection of network traffic. It requires NGFW (OT) deployment for the proper (micro)segmentation and filtering of network traffic. IPS/IDS (OT) and virtual patching should be used. Another aspect that should not be overlooked is the securing of email and Internet traffic and the screening of unknown files.

Endpoints – EDR/XDR solutions allow you to collect information about what is happening on devices (including USB device management).

Access Management – the centralized management of user/machine identities is also recommended, but with a strict separation of industrial and corporate identities. Jump Servers (+MFA) should be used for network and device management, and systems designed for network access control (NAC, 802.1x) should be implemented.

Auditing, backups, compliance, IRP, risk management – SIEM/SOC should be centralized, and the respective security logs should be evaluated. A robust backup strategy allows you to prepare for unexpected situations. The regular training of administrators and staff should not be omitted. Definitely do not underestimate risk analysis, which can be addressed by implementing, or at least taking inspiration from, the IEC 62443 standard. And, of course, require contractors to comply with security policies.

Expert approach

Industrial control systems combine a lot of complex hardware and software, often unfortunately very outdated. To maintain the highest level of cyber security readiness in OT, manufacturing companies must be both proactive and reactive in implementing protection. Specialized teams can help you with this. So don’t hesitate, schedule a consultation, and find out where your company stands.

The ChatGPT AI chatbot could be a gamechanger in cybersecurity, experts say

From the surgical debugging of programming code to instantly writing entire blocks of functional code and stopping cybercriminals, OpenAI’s newly launched and popular ChatGPT AI chatbot is changing the game, and its capabilities are virtually limitless. And not just in IT.

It has only been around since 30 November last year, but in just a few months it has already been discovered by millions of people around the world. We are talking about an artificial intelligence platform able to answer any question and help with various problems. ChatGPT can answer any general question; write letters, poems, and articles; and even debug and write programming code.

How the ChatGPT AI robot works

This conversational chatbot, backed also by the well-known visionary Elon Musk, who has been involved in AI for years, was developed by OpenAI. ChatGPT is designed to interact with humans in an entertaining way and answer their questions using natural language, which has made it an instant hit among professionals as well as the general public. It works by analysing huge amounts of text. Most of the texts were sourced from the internet, but the chatbot is currently not connected online, which means it won’t tell you the result of yesterday’s Sparta vs Pardubice game. It sees the interaction with the user in context, and hence it can tailor its response to be relevant to the situation. In this way, everyone can learn something.

Experts even suggest that the AI chatbot has the potential to replace the Google search function in the future: “Another very promising feature is its ability to write programming code in any user-selected programming language. This helps developers work on and debug their code, and it helps experts secure their systems,” points out Petr Kocmich, the Global Cyber Security Delivery Manager at Soitron.

How ChatGPT can be used by developers

Today, writing code is not a problem for ChatGPT. What is more, it is absolutely free. On the other hand – at least for now – it is advisable to avoid having the chatbot generate complete code, especially code that is linked to other code. The current form of the platform is still in the early stages of development, so it is naive for programmers to expect it to do all the work for them. Having said that, coders and developers can still find the tool useful.

They can use it to find bugs in the code they have written, and to fine-tune problematic code they have spent long hours writing. ChatGPT can help find a bug or a potential problem, and it can offer a possible solution to end those sleepless nights. Its ample computing power saves hours of debugging work and can even help develop source code to test the entire IT infrastructure.

There are some risks

Without much exaggeration, it could be said that ChatGPT can turn anyone into a cybercriminal, making it easier to carry out a ransomware, phishing, or malware attack. It may seem that the AI robot just needs to be asked to “generate the code for a ransomware attack” and then you just wait for the result. But, as Kocmich points out, it’s fortunately not that easy: “Conversations are regularly checked by AI trainers, and responses to this type of query, as well as other potentially harmful queries, are restricted by ChatGPT. Actually, the chatbot responds by saying that it does not support any illegal activities.”

On the other hand, even if it evaluates a question to be potentially harmful and thus refuses to give an answer, this does not necessarily mean that people can’t get to the answer some other way. “The problem with these safeguards is that they rely on the AI recognizing that the user is trying to generate malicious code; however, the true intent may be hidden by rephrasing the question or breaking it into multiple steps,” says Kocmich. Moreover, nobody can guarantee that some other similar AI robot would not refuse to answer such a question.

What to think about ChatGPT


As is often the case, there are two sides to every coin. While AI bots can be exploited by cybercriminals, they can also be used to defend against them. In the meantime, coders could gradually turn into “poets”. They would tell the AI chatbot that they need to write such-and-such a code that does this and doesn’t do that, or describe the same in a case study, and then they just wait for the AI bot to generate the code.

“Already, ChatGPT is being used by security teams around the world for defensive purposes such as code testing, reducing the potential for cyber-attacks by increasing the current level of security in organizations, and training – such as for increasing security awareness,” says Kocmich, adding in the same breath that we should always bear in mind that no tool or software is inherently bad until it is misused.

There is an abundance of data in industry, but few enterprises know how to make use of it

Most likely every manager of an industrial enterprise wishes to improve production quality, optimize production line operation and energy consumption, and anticipate machine and technology failures before they cause costly downtime. This goal can be achieved by analysing relevant data from machines and technologies.

Undoubtedly, working with collected data has a great potential to boost productivity and quality and optimize costs; however, these benefits can only be achieved by enterprises that can effectively collect and store data for further analysis. For this, they need a solution with a corresponding architecture at the IT level (data storage and data processing), the operational technology (OT) level (data capture), and the level of data transfer between OT and IT.

The data I (do not) have

Most industrial enterprises already have some experience with data collection. They usually record some production or assembly process data, such as the start and duration of the production cycle and the materials or semi-products used in the process. This gives enterprises an accurate overview of their stock of raw materials and parts and of the quantity of products produced, but they usually do not monitor production parameters that may affect production quality or continuity. This includes sensing physical parameters during production or assembly – such as temperature, pressure, flow, and vibration. An analysis of these parameters allows for production optimization and the identification of the need for maintenance intervention, thus preventing unexpected downtime.
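As a simple illustration of how such physical parameters can flag a maintenance need, each new reading can be compared against a rolling baseline and reported when it drifts too far. The window size, tolerance, and vibration readings below are invented for the example:

```python
from collections import deque

def drift_alerts(readings, window=5, tolerance=0.2):
    """Flag readings deviating from the rolling mean by more than `tolerance` (fraction)."""
    recent = deque(maxlen=window)   # keeps only the last `window` readings
    alerts = []
    for i, value in enumerate(readings):
        if len(recent) == window:
            baseline = sum(recent) / window
            if abs(value - baseline) > tolerance * baseline:
                alerts.append((i, value))
        recent.append(value)
    return alerts

# Hypothetical vibration amplitude (mm/s) drifting upwards near the end:
vibration = [2.0, 2.1, 1.9, 2.0, 2.1, 2.0, 2.05, 2.6, 2.7]
print(drift_alerts(vibration))  # the last two readings exceed the baseline
```

In production deployments, such thresholding is usually only the first step before more advanced statistical or machine-learning models, but even this simple check can surface a bearing or tooling problem before it causes downtime.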

Impediments to data collection

Collecting this necessary data is not always easy, and this may be due to various reasons: for example, some older machines and technologies may not be equipped to provide the necessary data. Newer machines may be able to do this, but often only in a proprietary format owned by their manufacturer. In other cases, data collected from production and assembly processes is only retained for the duration of the process and is no longer available for subsequent analysis. This results in disconnected islands of temporarily available data, meaning that production managers are unable to get a comprehensive overview of the production process and are thus unable to analyse the source of problems. They are also unable to identify energy losses or consumption peaks, which would allow them to avoid penalties for exceeding contracted capacity or to optimize contracts with energy suppliers.

Another obstacle may be an outdated IT infrastructure. Outdated databases are unable to store large volumes of production data, and they do not allow for the quick and efficient querying and analytical processing of such data. If an enterprise wants to have data available for production quality improvement, predictive maintenance, and energy management, it needs to have a universal technology architecture that can collect and store data from different systems, machines, and technologies. This is a prerequisite for later data analysis and evaluation as a basis for management decision-making.

The proper architecture

To achieve the above, enterprises deploy Industrial IoT (IIoT) solutions. In many cases, it is necessary to connect machines (OT) with the world of data storage and processing (IT). An example of such a connection is the use of OPC UA, which stands for Open Platform Communications Unified Architecture: a communication protocol and architecture designed for industrial automation. An OPC UA server, such as KEPServerEX, functions as a communication hub between the machines in operation and the IT components of the solution, enabling proprietary systems and devices from different manufacturers to communicate. This then leads to the deployment of modern industrial Internet of Things solutions (IT + industrial OT = IIoT).

Such solutions allow data collection from virtually any system, sensor, or device. The data is then routed to a database, such as Elasticsearch, which is capable of long-term data storage (rather than just storing it for the duration of the production cycle) and, most importantly, which provides scope for quick and efficient querying. This creates the prerequisites for subsequent analytical processing and data visualization, such as in the Grafana open-source tool.
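As a concrete sketch, the readings routed from an OPC UA server into a store such as Elasticsearch are typically shaped into timestamped JSON documents. The machine name, tag, and document layout below are illustrative assumptions, not a description of any specific deployment:

```python
import json
from datetime import datetime, timezone

def to_document(machine_id, tag, value, ts=None):
    """Shape one sensor reading into a JSON document for long-term storage."""
    return {
        "machine": machine_id,
        "tag": tag,    # e.g. an OPC UA node id such as "ns=2;s=Line1.Temperature"
        "value": value,
        "@timestamp": (ts or datetime.now(timezone.utc)).isoformat(),
    }

# A hypothetical temperature reading from a press machine:
doc = to_document("press-03", "ns=2;s=Line1.Temperature", 71.4,
                  ts=datetime(2024, 1, 15, 8, 30, tzinfo=timezone.utc))
print(json.dumps(doc, indent=2))
```

Storing every reading with an explicit timestamp and machine/tag identifiers is what later makes the quick time-range queries and Grafana visualizations possible.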

The road to artificial intelligence

For most industrial enterprises, an open architecture based on the above solutions, designed by Soitron specialists, is a generational leap forward in terms of their ability to collect and use data. Moreover, the ability to replace hardware components with software and the use of open-source software not requiring any paid commercial licences make this a highly cost-effective solution.

The single universal architecture can be used to collect and analyse production process data (such as temperature, pressure, liquid and gas flow rates, vibration, and power consumption) which may have a major impact on product quality and durability. At the same time, it can be used to predict machine failures; make maintenance more efficient; optimize energy, gas, compressed air, and water consumption; improve industrial safety; and make the system ready for any future application using machine learning and artificial intelligence technologies.

The current state of digitalization and IoT in industrial enterprises in the Czech Republic and Slovakia

It has been a decade since the Industry 4.0 concept was first introduced, yet only 15% of Czech and 10% of Slovak companies have implemented its principles. What is alarming is that 20% of Czech and 30% of Slovak companies have not even started to think about it, and that 5% of Czech and 8% of Slovak companies had considered these options but decided not to change anything.

Data is most often used for production management

The basic principles of the so-called Fourth Industrial Revolution – including digitalization and IoT – were first formulated in 2011. According to this idea, “smart factories” would emerge and utilize cyber-physical systems. Soitron, an integrator of innovative solutions and IoT technologies, conducted a survey to find out about the current stage of development among manufacturing companies in the Czech Republic and Slovakia.

The collected data shows that 58% of Czech and 51% of Slovak companies have either already implemented (or are currently implementing) elements of digitalization, IoT technology, and Industry 4.0 principles.

“In both markets, data from factories is most often collected automatically into a centralized system. Two-thirds of the surveyed companies do so. The second most common method is manual data collection whereby a technician walks around the machines and records data into a paper notebook or a computer,” explains Martin Hummel, an IoT solutions specialist and product manager at Soitron.

For more than three-quarters of companies (83% of Czech ones and 77% of Slovak ones), the main focus is on quantitative data used to manage productivity and manufacturing efficiency. Some Czech companies (32%) also use data for quality control and reject rate management, and some Slovak companies (33%) also use data for maintenance management and failure reduction.

Shortcomings in the way data is used or processed are perceived more often by Slovak companies (70%), with the Czech figure being a little more than half of all respondents (55%). Respondents in both countries most often mentioned that they saw room for improvement in terms of automation and digitalization and in having a centralized system (27% [CZ] vs 33% [SK]). Slovak respondents would also like to see more detailed and extensive data analyses and a general improvement in existing systems. Czech respondents, on the other hand, saw deficiencies in the speed of availability of outputs, i.e. the unavailability of real-time monitoring; they would also like to see better data collection and processing.

Up to 25% of Czech companies experience manufacturing downtimes on a daily basis

Maintenance planning in both countries (85% in the Czech Republic and 66% in Slovakia) is predominantly based on the recommendations of technology manufacturers and vendors. Then it is based on the judgment of the staff in charge (75% in the Czech Republic vs 44% in Slovakia), and only after that is it based on collecting and evaluating data from manufacturing, machines, and other sources. The future, however, lies primarily in the automated collection of data from sensors installed on manufacturing and assembly machines and equipment.

“For factories, the collected data is the most important and valuable source of information. Once processed by a data analysis or by machine learning procedures, this data can help companies predict impending failures and breakdowns. This enables operation specialists to make timely and correct decisions and to intervene as necessary and thus save significant costs,” adds Hummel.

If we look at downtime frequency, the survey shows that the Czech enterprises generally face slightly fewer unexpected downtimes (25%) than the Slovak ones (31%). The average cost of unexpected manufacturing downtime is more or less the same in both countries – it is 1.7 million Czech crowns in the Czech Republic and the equivalent of 1.5 million Czech crowns in Slovakia. It is important to realize that sometimes a malfunction causes the production line to stop and thus halt the whole production. A malfunction may take several hours to repair, which means that downtimes represent a significant financial loss to any company. The answer to this is predictive maintenance based on collected data that can provide an early warning of an impending failure.
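A back-of-the-envelope model shows what these figures imply: with an average incident cost of 1.7 million CZK, even a modest reduction in incident frequency through predictive maintenance translates into sizeable annual savings. The incident count and preventable share below are illustrative assumptions:

```python
def annual_savings(incidents_per_year, avoided_fraction, cost_per_incident_czk):
    """Estimated yearly savings from downtime incidents avoided by predictive maintenance."""
    avoided = incidents_per_year * avoided_fraction
    return avoided * cost_per_incident_czk

# Hypothetical plant: 12 unplanned stops a year, 30% preventable, 1.7M CZK each.
print(f"{annual_savings(12, 0.30, 1_700_000):,.0f} CZK/year")
```

Even under conservative assumptions, the savings quickly exceed the typical cost of the sensors and data infrastructure required.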

Operational data for maintenance intervention predictions saves resources

The digitalization of production processes is still a major challenge for most Czech and Slovak industrial enterprises. “By deploying modern industrial IoT solutions, it is possible to continuously monitor and analyse operational parameters, production quality, and the manufacturing environment, and thus detect problems early and – in many cases – even predict impending failures and downtimes. Investing in IoT solutions in manufacturing clearly pays off,” says Hummel in conclusion.

Start metering your energy consumption and you will save on one of the biggest costs

Trading, manufacturing, and service companies as well as entities such as hospitals and public organisations tend to perceive energy as a necessary cost item which is constantly growing.

They do not pay much attention to it, because some of their managers believe that continuous consumption monitoring, identifying inefficiencies, and adapting their power consumption to the “reserved capacity” contracted with their energy distributor is too complicated, if not impossible.

And yet for many businesses, especially small and medium-sized enterprises, energy is one of the largest cost items. The most energy-intensive sectors include food, pulp and paper, and the chemical industry as well as services. Indeed, the industrial and services sectors account for almost two-fifths of the EU’s total energy consumption (including households).

It is often said that if you cannot measure it, you cannot manage and optimise it. Energy is a perfect example of this. Only a detailed overview of the largest expenditures or leaks and an understanding of when, where, and how they occur can make subsequent remedial action possible.

The good news is that current technologies based on the Internet of Things (IoT) make it relatively simple to monitor energy consumption in great detail. The basis for this is the use of smart meters providing real-time data on the consumption of utilities such as electricity or water. Based on our experience at Soitron, our customers achieve energy cost savings by using this technology in the three following ways:

  1. Identification of errors and energy wastage

    The first way to use metering to reduce energy consumption, and thus also your costs and carbon footprint, is to compare invoices from the energy provider against your own metering. This can reveal energy losses, such as water pipe leaks in larger industrial facilities.

    More detailed consumption metering, monitoring, and comparisons also tend to reveal unnecessary waste, such as when a specific facility has a significantly higher consumption than other comparable ones. A detailed consumption overview has helped some companies realise that they kept their air conditioning running unnecessarily, even on weekends or outside work hours. In our experience, organisations can save up to five percent of their energy costs in the first year after metering implementation.

  2. Current consumption and energy supply contract optimisation

    If a company is unable to monitor their electricity consumption over time and actively manage individual energy consuming systems, they can easily repeatedly exceed their reserved capacity. In such cases, distribution companies charge penalties for overconsumption, often amounting to several thousand euros.

    Some organisations deal with this problem by increasing their reserved capacity. However, such a solution may turn out to be even more costly as they have to pay higher fixed charges for a capacity that they seldom use.

    A smarter solution is to meter the energy consumption continuously and to intervene when necessary by temporarily turning down room chilling or reconfiguring the operation of any large appliances that do not necessarily have to all run at the same time. A similar approach can be used to avoid power outages if the infrastructure is hitting its technology ceiling and the distributor is unable to reserve a higher capacity.

  3. Feedback for maintenance and changes in behaviour

    Detailed reports on energy consumption down to the level of individual workplaces or even machines may also have other positive effects. For instance, they provide more insight into the technical condition and the intensity of use of machines. If the consumption is too high, it may indicate a fault, but if it is too low it may indicate that the machine is not being used enough.

    Finally, if the metering results are well communicated, they also help to change employee behaviour, which can lead to further savings and higher efficiency.
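The reserved-capacity monitoring described in point 2 can be sketched as a simple check over metered intervals: flag every interval in which average demand exceeds the contracted capacity, so the operator can intervene before penalties accrue. The capacity value and quarter-hour readings below are illustrative:

```python
def capacity_breaches(readings_kw, reserved_kw):
    """Return (interval_index, demand) for each metered interval above reserved capacity."""
    return [(i, kw) for i, kw in enumerate(readings_kw) if kw > reserved_kw]

# Hypothetical quarter-hour average demand (kW) against 500 kW reserved capacity:
demand = [420, 460, 510, 495, 530, 480]
for i, kw in capacity_breaches(demand, 500):
    print(f"interval {i}: {kw} kW exceeds reserved capacity")
```

In practice, the same check runs continuously on live smart-meter data, and the alert triggers an action such as temporarily turning down chilling or staggering large appliances.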

Conclusion

Of course, there are several ways to achieve energy savings. A company can change their energy supplier, try to get better conditions by renegotiating the energy supply contract, and invest in more energy-efficient machines and equipment.

However, consumption metering is very affordable and can be implemented relatively quickly, which is why it is proving to be one of the most effective ways to reduce energy costs.

Contact our IoT team and find out how much your organisation can save on energy.