Sensor AI has a role not only in industry, but also in sectors such as medicine and consumer electronics.

Corporate artificial intelligence and robotics are no longer just futuristic concepts; they are becoming an integral part of everyday business operations. They enable companies to improve efficiency, productivity, and responsiveness to change, and they help drive product innovation. One of the key aspects of this integration is the use of sensor AI, which allows data to be collected and analysed using a variety of sensors and devices.

Let’s take a look at some examples of how sensor AI can transform various industries and innovate businesses. The following examples illustrate practices already in use today:

Industrial automation: In industrial automation, sensor AI is used to monitor and control manufacturing processes. Sensors can monitor essential parameters such as temperature, pressure, or humidity, but also detect microscopic changes in the environment that could signal potential issues. For example, sensors detecting changes in air pressure can warn of impending equipment failure, allowing maintenance to be carried out before the problem becomes serious.
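The early-warning idea described above can be sketched as a simple rolling-baseline check: flag a pressure reading that deviates sharply from recent history. This is a minimal illustration, not a production predictive-maintenance system; all names and thresholds are assumptions.

```python
from collections import deque

def pressure_anomaly_detector(window=20, threshold=3.0):
    """Flag readings that deviate sharply from the recent baseline."""
    history = deque(maxlen=window)  # rolling window of recent readings

    def check(reading):
        alert = False
        if len(history) == window:
            mean = sum(history) / window
            std = (sum((x - mean) ** 2 for x in history) / window) ** 0.5
            z = abs(reading - mean) / (std or 1e-9)  # guard against zero std
            alert = z > threshold  # possible early sign of equipment failure
        history.append(reading)
        return alert

    return check

check = pressure_anomaly_detector()
readings = [101.3] * 25 + [108.9]  # stable baseline, then a sudden spike
alerts = [r for r in readings if check(r)]  # only the spike is flagged
```

A real deployment would use proper statistical or machine-learning models trained on historical telemetry, but the principle is the same: learn what "normal" looks like and warn when the sensors drift away from it.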

Medicine: Sensors enable the monitoring of heart rate, blood pressure, or glucose levels. This data can be analysed by artificial intelligence, for example to diagnose and monitor health conditions or to predict the future course of a disease or a patient’s response to treatment. Sensor AI can detect patterns in blood pressure changes that predict when the next hypertensive crisis will occur and warn the doctor or patient well in advance.

Autonomous vehicles: Sensors are crucial for collecting data about the surrounding environment. In addition to traditional sensors such as lidar, radar and cameras, modern vehicles often use other advanced sensors such as ultrasonic sensors to detect obstacles in the vicinity of the vehicle or sensors to measure road quality. This information is essential for the proper functioning of autonomous systems, which must be able to quickly and accurately respond to various situations on the road to ensure safe driving.

Smart cities: Sensor AI in smart cities is used to monitor traffic, air quality, noise levels, and other factors affecting the environment. Modern sensors measure essential parameters and identify specific pollution or problems in public infrastructure. For example, a sensor network in a city can detect gas leaks in the distribution network and automatically alert the relevant authorities, enabling rapid action.

Wearables: Sensors in electronics such as smart watches or fitness bracelets collect data on movement, heart rate and other physiological parameters. This information is not only used for personal monitoring and improving the health of users but can also be shared in the form of anonymised data with research institutions or public health organisations to analyse and predict epidemics or to track population health trends.

Why don’t Czech companies use AI?

Despite all these potential benefits, many Czech companies are still hesitant to implement AI into their processes. There are several reasons for that.

Firstly, there is a shortage of qualified experts in the Czech Republic who could design and implement AI systems into corporate infrastructure.

According to RSM, a local IT consulting firm, 48% of companies have the technical conditions in place for rapid AI implementation, but development is hindered by both managers and legislation. The analysis points to specific obstacles, such as managers’ low willingness to bear the risks of the pioneering phases of AI implementation, including legislative and security aspects (e.g., personal data protection). It may also be difficult to agree across the company on how corporate AI should work. Moreover, significant revisions of existing legislation and updates to the national AI strategy are needed, a process that is still in its early stages.

Some companies don’t have a clear idea of how to use AI to improve their processes or innovate products and services. This lack of awareness may lead to a lack of motivation to invest in AI technologies.

However, organisations should not resist this trend. In countries such as Japan and the US, AI is already widely used, including in autonomous taxis. Once Czech companies overcome their concerns and embrace AI as an essential part of their operations, they can enjoy higher efficiency, innovation, and a competitive advantage. There is hardly any company that cannot benefit from what AI has to offer, from smaller tasks such as data processing and analysis, through process automation and automated vehicle control, to fully autonomous factory or shop floor operation.

The age of semantic automation

The combination of Robotic Process Automation (RPA) and Artificial Intelligence (AI) creates a powerful symbiosis that can elevate business productivity and efficiency to a new level. Semantic automation, based on generative artificial intelligence, is a driving technology with the potential to fundamentally change the way companies operate.

In a time when digital transformation is a necessity rather than just a trend, RPA is becoming a major player. With its ability to automate mundane, repetitive tasks, RPA significantly enhances employee productivity. Together with AI, they form a synergistic duo, combining automation with the creativity of the human mind.

According to IDC, automation reduces companies’ operating costs by 13.5% and saves an average of 1.9 hours of work per employee per week (source: IDC, Worldwide Automation Spending Guide 2022). These figures highlight the transformational potential of RPA and AI in increasing productivity and reducing costs.

RPA and generative AI – the combo for perfect automation

Generative artificial intelligence, as a subset of AI, focuses on the creation of content or data rather than just processing it. It uses machine learning techniques such as neural networks and deep learning to create new content in various forms. “Generative AI models learn from existing data and use this knowledge to produce original, creative and contextually relevant output,” explains Viktória Lukáčová Bracjunová, Head of Robotics and Automation at Soitron.

RPA excels at repetitive tasks: it follows rules and procedures, doesn’t make mistakes, and doesn’t need breaks. It performs best in structured processes with minimal deviations, making it a key technology for companies looking to reduce costs, cut errors, and speed up routine tasks.

Using semantic automation in a dynamic environment

In a dynamic automation environment, the combination of RPA and generative AI creates a powerful synergy that goes beyond the capabilities of either technology alone. “RPA successfully handles routine tasks, while generative AI is strong at processing complex, unstructured data and solving creative challenges. RPA ensures process consistency and minimises errors, while generative AI analyses data and provides deeper insights, improving the quality of strategic decisions,” says Viktória Lukáčová Bracjunová.

In the field of AI and natural language processing, semantics plays a crucial role. It provides the foundation for building advanced generative AI systems that can better understand and interact with human language, which is essential for the success of many AI applications.

Implementable in any company

RPA’s integration with existing systems and applications makes this technology an ideal choice for automating tasks within existing workflows. Next-generation automation can work with a variety of data types and formats, ensuring compatibility with a wide range of processes. Generative AI integration enhances customer experience through personalized interactions, understanding natural language and solving complex queries with empathy.

Where semantic automation can help

  • Advanced Natural Language Processing (NLP) capabilities – the system can understand and respond to a customer’s natural language, which is key to automating customer care, order processing, and maintaining customer relationships.
  • Machine Learning (ML) – allows robots to learn from data and improve their performance over time, which is crucial for tasks requiring adaptivity or decision making.
  • Optical Character Recognition (OCR) – enables the reading of information from unstructured digitized documents such as PDFs and images.
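As a toy illustration of the capabilities listed above, the sketch below uses plain regular expressions as a stand-in for the NLP step: it pulls structured fields out of a free-form customer message so an RPA bot could act on them. A production system would use a real NLP or OCR engine; all names here are hypothetical.

```python
import re

def extract_order(text):
    """Stand-in for NLP-based field extraction: pull an order number
    and quantity out of free-form customer text for an RPA bot."""
    order = re.search(r"order\s+#?(\d+)", text, re.IGNORECASE)
    qty = re.search(r"(\d+)\s+units?", text, re.IGNORECASE)
    return {
        "order_id": order.group(1) if order else None,
        "quantity": int(qty.group(1)) if qty else None,
    }

msg = "Hi, please change order #4821 to 15 units, thanks!"
fields = extract_order(msg)  # structured output a robot can process
```

The point of semantic automation is precisely to replace such brittle, hand-written rules with models that learn to understand the message regardless of how it is phrased.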

Meeting the challenges of the modern market

In a rapidly evolving market environment, the combination of RPA and generative AI delivers precision automation, as well as creative innovation, providing a competitive advantage when deployed. Ignoring this technological symbiosis means missing an opportunity. “Now is the right time to let RPA and generative AI technologies collaborate and achieve improved results,” concludes Viktória Lukáčová Bracjunová.

Generational leap! Deploy Cisco Catalyst Center for campus network management

The corporate network infrastructure has undergone fundamental changes in recent years. Campus network management needs to respond to the way IT is currently consumed. Cloud environment, IoT and hybrid working create extreme demands on network performance and security. Cisco Catalyst Center (CCC) is a centralized virtual platform designed to simplify and streamline network management while significantly improving security posture.

Cisco Catalyst Center offers centralized, intuitive network management that makes it easy and fast to design, provision, and apply policies across the entire network environment. The Cisco Catalyst Center graphical user interface provides complete network visibility and uses network information to optimize network performance and deliver the best user and application experience. CCC can be deployed as a hardware appliance, but most customers appreciate the virtual platform option, which is available for the AWS cloud service and now also for the VMware on-premise platform.

Firsthand experience

The Soitron team, an implementation Gold Partner of Cisco, is one of the top specialists in deploying Cisco Catalyst Center in corporate environments. Soitron was one of about 60 selected companies worldwide involved in testing the very first pre-production version of the tool (then known as Cisco DNA Center). “We used the platform to manage our own network in the Czech Republic, Slovakia and Bulgaria. We tested the tool for any issues with installation, resources, certificates, and security. Our actual telemetry data were made available to Cisco and used for further development,” said Marianna Richtáriková, Network Business Unit Manager at Soitron.

The scorecard for Catalyst Center

Having first-hand practical experience with the tool, Soitron’s experts were able to identify the areas and situations in which the Cisco Catalyst Center platform has the highest added value.

Network Design: If you are building a new network from scratch, CCC makes it very easy to design connections in a hierarchical way, adding and defining additional elements in a single tool. Of course, it is also possible to gradually convert the legacy network infrastructure to a modern software-defined network infrastructure.

Centralization: Cisco Catalyst Center enables the centralized management of the entire network, simplifying device configuration and monitoring from a single dashboard.

Automation: CCC provides advanced tools for automating network operations, allowing for fast and consistent network deployment, reducing error rates, and saving time. Any configuration changes can be applied at once across an entire group of devices, minimizing the time a network administrator needs to spend on tedious manual tasks.

Analysis and Diagnostics: The tool provides extensive monitoring and analysis capabilities for network traffic and selected application services. It helps identify problems and respond quickly to outages or security incidents. CCC telemetry provides real-time as well as historical data, making diagnostics much easier.

Security: Cisco Catalyst Center integrates security features and makes it possible to monitor the network’s security status. It helps identify threats and enhance network protection. Automated procedures allow security policy to be prepared in advance and then applied from a single point to any device managed by CCC. For end-users, security policies are applied upon user login (authentication) to the network.

Integration: CCC is designed to be compatible with other Cisco products and technologies, allowing the functionality to be scaled as necessary. It can be connected to platforms such as ThousandEyes for network, internet, and cloud monitoring. An interesting integration is the connection of Apple, Samsung or Intel devices, enabling the monitoring of communication from the device end-user perspective. As for application services, CCC can evaluate and interpret the status of application services such as Webex, MS Teams, and others. An integral part of the solution is also the support for location-based services through integration with DNA Spaces.

Choosing Cisco Catalyst Center makes it possible to create connections not previously possible and transform slow manual processes into automated workflows.

Align your data centre with automation

We have been using automation for decades, and it has permeated virtually all areas of business. It’s not surprising that the operation of something as complex as a data centre is being automated as well. Even though this is the realm of ones and zeros, automation brings the same benefits as in industry: speeding up all operations, eliminating routine manual activities, increasing safety, and solving shortages of specialists.

Why introduce automation into a system where processes run seemingly without human intervention? A layman may think that as long as everything is running smoothly, data centres do not require manual interventions and the human factor usually comes into play only in the event of a crisis, incident, or failure.

However, the philosophy and architecture of today’s IT world has changed so much in recent years that it’s almost impossible to do without automation. This is due to the cloud, the use of SaaS services, and, above all, new approaches to the development and deployment of omnipresent applications. Any company that is serious about digitalization must implement automated processes because it simply won’t succeed in business with the old infrastructure.

Data centres are automated by software that provides centralized access to the configuration of most resources. This lets these technologies and resources be controlled and managed easily, and often without the knowledge of all the technical details. Accessing the required services is much easier and faster. Unlike in the past, when new application request handling typically took days to weeks, now the same task is likely to take just a few minutes.

Modern applications need automation

Companies are most often “forced” to use automation tools when they transition to the cloud or hybrid environments, want to quickly develop and deploy applications, or need to speed up the implementation of new environments and reduce dependence on human resources.

The time when applications were developed and tested over a period of weeks, or even months, is irretrievably gone. Today’s applications are built from many smaller parts (microservices), each of which can be independently changed or upgraded. In addition, development, testing, and deployment require an automated infrastructure that allows for rapid changes and modifications.

Another common scenario when automation is required is a company’s transition to the cloud, or, even more often, to a hybrid environment combining the cloud and an on-premise infrastructure. While clouds are automation ready, your own data centre environment requires automation and then these two environments need to be interconnected.

Finally, the deployment of automation is also motivated by the desire to accelerate the implementation of new environments and resolve human resource shortages. The shortage of top IT experts is a widespread phenomenon, so strategically it is useful for data centres to reduce their dependency on staff leaving the company or making mistakes.

Infrastructure as code

Soitron approaches automation by building automation platforms that allow automation procedures to be defined for any given data centre environment (cloud, hybrid, or on-premise). The principle is similar to writing source code for applications: rewriting the entire infrastructure as code (Infrastructure as Code) brings many advantages over manually creating the environment.

When the infrastructure is defined by scripts, the same environment is recreated each time it is deployed. This approach is most beneficial when the customer wants to centrally manage their environment and dynamically change individual application environments. With scripts it is easy to make changes across the entire infrastructure. Let’s say a company determines that all its databases should be backed up ten times a day and that each database can only be accessed by predefined administrators. When the company later opens a new branch and the conditions change, it can simply use scripts to increase the number of daily backups or add more administrators as necessary.
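The backup example above can be sketched in a declarative, Infrastructure-as-Code style: the desired state lives in code, and a plan step computes what must change. This is a toy illustration of the principle (real deployments would use a tool such as Terraform or Ansible); all resource names are hypothetical.

```python
# Desired state, kept in version control like any other source code.
desired = {
    "db-main":   {"backups_per_day": 10, "admins": ["alice", "bob"]},
    "db-branch": {"backups_per_day": 10, "admins": ["alice", "carol"]},
}

# What is actually deployed right now.
current = {
    "db-main": {"backups_per_day": 4, "admins": ["alice", "bob"]},
}

def plan(current, desired):
    """Return the changes needed to bring `current` in line with `desired`."""
    changes = []
    for name, spec in desired.items():
        if name not in current:
            changes.append(("create", name, spec))
        elif current[name] != spec:
            changes.append(("update", name, spec))
    return changes

changes = plan(current, desired)  # update db-main, create db-branch
```

Because the desired state is explicit, the same script can be re-run at any time to converge the environment, and a diff of the code is a diff of the infrastructure.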

Documentation serves as an infrastructure backup

Another advantage of automation is the documentation. The environment is defined by code, which makes it easy to see what is deployed and how everything is configured. The deployment can be replicated at any time (serving as an infrastructure backup) such as in the case of a failure or when you need to create a parallel testing environment.

“Documentation also solves the problem of staff substitutability or dependency, because the moment you have a code-defined infrastructure, a much wider range of people can work with it, rather than a single IT engineer who happens to remember how the system was originally set up,” explains Zbyszek Lugsch, a business development director at Soitron.

An environment standardized by scripts allows other people to work with such an environment and enter machine- or application-specific parameters. Scripting also allows the same security standards to be applied across the system, including when installing new servers.

Automation is a tailor-made project

To a large extent, data centre automation is always a unique project, as every company uses slightly different technologies and is at a different stage of automation deployment. The course of action is defined by the company’s intention, such as the need to create a suitable environment for developers or the desire to convert an on-premise centre to a hybrid architecture. One of the early steps is a proposal for replacement or extension of the data centre’s existing functions, components, or layers, followed by the implementation of the automation platform and the creation of scripts. The actual deployment takes place by gradually migrating existing systems to the new environment until the “old” environment can be shut down.

Starting to work in an environment that enables rapid application development and deployment will also reduce staffing requirements and offer superior security.

A #Cisco ExpertTip by Martin Divis, a systems engineer at Cisco: Deploy ICO

One of the most comprehensive automation tools is Cisco’s Intersight Cloud Orchestrator (ICO). ICO is a Software as a Service (SaaS) platform that allows you to manage a wide range of technologies, such as servers, network devices, and data storage, across your entire enterprise infrastructure.

The main advantage of ICO is its multi-domain and multi-vendor approach, which allows the tool to be used regardless of the current implementation. ICO includes an extensive library of predefined tasks that can be deployed for recurring tasks or processes in managed infrastructures. ICO works in a low code/no code design and allows drag-and-drop task setup and running. It is designed with a maximum emphasis on ease of use, making automation operations accessible to a wide range of IT team members.

PV management systems are becoming a ticking time bomb among publicly accessible online control systems

Local companies have started to upgrade the protection of their control systems. According to data from Soitron’s Void Security Operations Centre (SOC), the number of devices exposed and visible on the internet has dropped by 21% since the beginning of 2022; however, the current situation is still not desirable. In particular, the control systems of industrial and domestic photovoltaic (PV) power stations are becoming an alarming danger.

In a year-on-year comparison (01/2022 vs 01/2023), the total number of publicly available Industrial Control Systems (ICS) with at least one of the eight monitored protocols – such as Moxa, Modbus, and Tridium – has fallen. This was the finding of Soitron’s team of Void SOC analysts. “This is a slight improvement, but in absolute figures it still means there are more than 1,500 vulnerable systems in various organizations, which is still a significant risk. And we’re only talking about the eight most commonly used protocol types. If we expand the set to a few dozen types of protocols, we can find more than 2,300 vulnerable systems,” says Martin Lohnert, the director of the Void SOC. He also adds that it would be ideal if the downward trend were due to the increasing security of these industrial systems, which would make them disappear from this report.

Often, however, the opposite is true: an ICS disappears from the internet only after it has been exploited by attackers and has stopped functioning. When bringing it back online, operators are more careful not to repeat the original mistakes. Unfortunately, such fixes are often just a response to damage already done.

The PV phenomenon has given rise to a new problem

Despite the overall reduction and a slight improvement in the situation, new vulnerable systems are still being added. “Last year these were mostly control systems for photovoltaic power stations. And that includes both industrial stations with hundreds of installed solar panels as well as home installations,” says Lohnert. In his view, it should be in the interest of operators to make sure that their equipment is protected from internet security threats. Essential steps include changing default login credentials, restricting access, regularly updating firmware, and monitoring for misuse or login attempts.

The potential damage includes the deactivation of the system itself, losses due to disrupted solar power generation, and the cost of repairs. At the same time, a successful penetration of a poorly secured industrial system can allow an attacker, often undetected, to attack more important systems essential to a company’s operation. This may bring the organization to a complete standstill, with losses running into millions of Czech crowns.

The Czech Republic is lagging behind, but there is a solution

Although Soitron’s Void SOC has recently seen a positive trend, it is very likely that the number of potential risks will increase. In the Czech Republic and Slovakia, the major digitalization of industry is yet to take place. Both countries still lag far behind other EU countries in many aspects of digital transformation. Out of the twenty-seven EU countries, Slovakia is ranked 24th and the Czech Republic is ranked 20th in the Digital Economy and Society Index (DESI), which has been tracked by the European Commission since 2014. With the progress of digitalization, new technologies are gradually being introduced, such as production control systems, various sensors, programmable logic elements, and human-machine interfaces. If care is not taken to secure them, the figures in this survey will rise.

It is also clear that in order for digitalization to significantly shift the current state of cybersecurity, many industrial enterprises will need to invest in the tools, technology, and specialists to operate them; however, in most organizations (especially SMEs) this is not possible. “The most common reasons are insufficient funding and a general lack of qualified cybersecurity professionals. We therefore expect that the situation is likely to worsen before there is more awareness and a shift for the better. The safest solution is to use services of experts who will fully take care of the security of your business systems and infrastructure. Our monitoring centre can have everything under control 24 hours a day, 365 days a year,” adds Lohnert.

The NIS2 Directive can increase the security level of organizations in the Czech Republic

The European NIS2 (Network and Information Security Directive 2) can make things more difficult for Czech organizations, but it can also help them solve their cybersecurity problems. This is particularly the case for organizations that have not addressed this serious threat until now or could not justify the budget needed for sufficiently qualified staff.

The NIS2 Directive aims to make the EU’s digital infrastructure more resilient to cyber attacks and improve coordination and incident response capabilities. “Many entities in the Czech Republic and elsewhere are not taking these matters seriously enough. This is due to the fact that there is a shortage of IT experts – let alone cybersecurity experts – on the market. Since the entities affected by the new directive will be obliged to ensure that their IT networks and information systems are sufficiently protected against cyber threats, this problem may become even worse,” says Petr Kocmich, Global Cyber Security Delivery Manager at Soitron.

What the directive changes

The institutions concerned must implement measures to prevent cyber attacks and threats, such as performing regular software updates, securing network devices, and providing protection against phishing attacks. In addition, they must prepare contingency plans for cyber incidents and establish mechanisms to deal with them quickly and effectively.

Major incidents must be reported within twenty-four hours of becoming aware of the incident and cooperation with national security authorities will be required. Any company that fails to comply with these requirements may be subject to fines and other sanctions.

Killing two birds with one stone

It would be great if the NIS2 Directive could help end the shortage of cybersecurity experts; however, this is unlikely, and at first glance it might even seem to exacerbate the problem. That said, the new regulation is an excellent opportunity to make organizations more secure. External cybersecurity service providers can help: they have the capabilities that organizations lack. “Specialized companies focus on providing these services and can help entities implement security measures and risk management as a complete package, i.e. a turnkey service or a solution delivery including support and compliance with the NIS2 Directive,” says Kocmich.


Specialized companies can help organizations solve both problems: ensuring compliance with the new requirements and improving the previously incomplete security of their IT infrastructure and information systems. However, even if organizations use the services of such providers, the responsibility remains theirs.

They should choose their service provider carefully and ensure that the provider is sufficiently qualified, experienced, and certified. It is also important to assign tasks properly and to monitor the contractor’s performance. To ensure the model is efficient and effective, roles and responsibilities should be clearly defined in the contract between the organization and the cybersecurity service provider. It should be understood that the quality of the service delivered often reflects the quality and management capabilities of the provider.

Who the NIS2 Directive applies to and from when

The directive will mean greater obligations for companies in the Czech Republic in relation to network and information system cybersecurity and protection. However, it will also improve protection and resilience against cyber threats and cooperation between European countries. Last but not least, meeting the requirements of the NIS2 Directive can help organizations gain the trust of their customers and partners, who will be more satisfied with the protection of their data and information. Overall, the directive could help entities to improve their security practices and minimize risks.

The NIS2 Directive applies to electricity producers, healthcare providers, electronic communications services, and over sixty other services categorized into eighteen sectors. In the Czech Republic, the directive will start to apply on 16 October 2024 and will cover up to 15,000 entities – these are medium and large companies with over fifty employees and companies with an annual turnover of over CZK 250 million. Although the NIS2 Directive will only apply to organizations that meet the defined criteria, and others are not directly obliged to comply with the requirements, it is worth considering using it as a recommendation for improving general cybersecurity in other companies.

Opportunities for other entities

“It is estimated that up to 70% of domestic organizations have a problem with their cybersecurity. Smaller and medium-sized enterprises in particular do not have sufficiently secure IT systems and do not comply with basic security measures,” says Kocmich. Common problems include overly permissive user access rights, a lack of two-factor or multi-factor authentication combined with weak passwords (including administrators’ passwords), poor management and decentralization of user identities, outdated and unpatched hardware and software containing vulnerabilities, missing network segmentation, weak or missing email and internet access protection, inadequate perimeter protection, low visibility into network traffic, low or missing endpoint security, a lack of central log management, and inadequate employee training. “Cybersecurity is a big issue for many companies in the country. They can become easy targets for attackers. The NIS2 Directive should help raise awareness and protection against cyber threats,” adds Kocmich.

For more information on obligations under the NIS2 Directive, see the dedicated website (http://nis2.nukib.cz) of the National Cyber and Information Security Bureau (NCISB).

The effect of misconfigurations on business

International and Czech organizations continue to move their IT systems and data to the cloud. However, moving to the cloud is not just about migrating data; it also changes how system administrators work, which often brings new challenges and new configuration procedures. During migration, it is easy for something to be overlooked or not configured in accordance with best practices. This leads to “misconfigurations”. As a result, companies are unnecessarily exposed to more attacks than before and cannot adequately defend themselves against them.

Both cloud and on-premise solutions offer clear benefits and address specific challenges and needs of organizations. However, taking an existing, fully local IT infrastructure and moving it to the cloud without making the necessary changes (the so-called “Lift & Shift” approach) is a common mistake. Both on-premise and cloud environments have their pros and cons, which is why customers often use hybrid environments. The reasons are usually legal requirements (concerning data sensitivity and where the data may be physically stored), the architecture, and the complexity of legacy applications that can be made cloud-compatible only with disproportionate investment and effort, or not at all.

Forcing it is not acceptable

Migrating to the cloud can help organizations reduce IT costs (if cloud resources are used appropriately) and gain more computing power. More importantly, they get scalable performance available at any time, more flexible storage, and simpler, faster deployment of systems and applications, while being able to access data and systems from anywhere, at any time, 365 days a year.

However, as far as cyber security is concerned, deploying the cloud can increase the likelihood of an organization being attacked by malicious actors. If the “let’s go to the cloud” decision is made, it should be taken with due responsibility. First and foremost, it is important to understand that cloud security is a shared responsibility between the cloud service provider and the customer, so the cloud is never a panacea. The choice of service model (IaaS/PaaS/SaaS) matters: if the goal is to relieve the in-house IT and security teams, PaaS or SaaS is usually the right way, since most of the responsibility then falls on the cloud service provider. In addition, the move to the cloud should be seen as an opportunity to adopt a modern and secure corporate infrastructure. At the same time, the security department must not be forgotten; it should be a fundamental and integral part of any such project.

Unfortunately, many cloud migrations amount to simply forcing the existing system into the cloud in its current form. Instead, companies should ideally start utilizing native cloud resources, which often requires replacing existing monolithic applications. Otherwise, they gain nothing by merely moving their systems and data to the cloud, and it will most likely cost them more than their original on-premise solution.


Misconfiguration playing the main part

Today’s on-premise solutions are relatively well equipped with security monitoring and auditing tools based on established and proven standards, but this is not necessarily true after a migration to the cloud. Cloud misconfigurations are vulnerabilities waiting to be noticed by attackers: gateways through which it is possible to infiltrate the cloud infrastructure and, thanks to the interconnection of hybrid environments, also move laterally into the existing on-premise infrastructure. This allows attackers to exfiltrate data, including credentials, telemetry from machines in the OT environment, health records, and personal data, and then, for example, deploy ransomware.

According to experts, an average enterprise accumulates hundreds of misconfigurations every year, and its IT department is unaware of the vast majority of them. Virtually all misconfigurations are the result of human error combined with the absence of cloud configuration health-check tools (e.g. Cloud Security Posture Management, or CSPM).
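To illustrate the kind of check a CSPM tool automates, here is a minimal sketch in Python. The resource records, field names, and rules below are hypothetical illustrations, not any vendor’s actual API; a real CSPM product evaluates hundreds of such rules against a live cloud inventory.

```python
# Minimal sketch of a CSPM-style configuration audit.
# The resource records and rule names are hypothetical illustrations.

def audit(resources):
    """Return a list of (resource_id, finding) for common misconfigurations."""
    findings = []
    for r in resources:
        if r.get("public_access", False):
            findings.append((r["id"], "publicly accessible"))
        if not r.get("encryption_at_rest", True):
            findings.append((r["id"], "encryption at rest disabled"))
        if r.get("admin_credentials") == "default":
            findings.append((r["id"], "default admin credentials"))
        if not r.get("mfa_required", False):
            findings.append((r["id"], "MFA not enforced"))
    return findings

# Two made-up resources: a storage bucket left public and a VM
# still using default credentials without MFA.
resources = [
    {"id": "storage-1", "public_access": True,
     "encryption_at_rest": True, "mfa_required": True},
    {"id": "vm-7", "admin_credentials": "default", "mfa_required": False},
]

for rid, finding in audit(resources):
    print(f"{rid}: {finding}")
```

The point is that each finding is mechanical to detect once the configuration is inventoried, which is exactly why unmonitored environments accumulate them unnoticed.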

The impact of cloud misconfiguration on system security

When migrating systems, it often happens that services which had been available only internally in the on-premise solution end up exposed to the public internet, without any filtering or blocking of external network traffic. Many companies suffer from this, including critical infrastructure operators. A console for controlling industrial control systems may thus become publicly available online; we recently detected an ICS console of a production and assembly line control system accessible online without any authentication. We also often see services containing exploitable vulnerabilities without any additional security: protection that had been deployed in the on-premise solution but was never implemented in the cloud (e.g. a missing Web Application Firewall). Services with default credentials, services used for the remote management of customers’ internal systems, and even freely accessible sensitive data are also quite common.

This is why our monitoring centre’s statistics show dozens to hundreds of incidents per month. Security misconfigurations become easy targets for attackers, who know that they are present in almost every enterprise. This neglect can have disastrous consequences. It helps attackers reconnoitre and infiltrate customer environments, establish persistent remote access, take control of systems, and exfiltrate data and login credentials, which are then disclosed or sold for use in further attacks. Alternatively, it opens the door to lateral ransomware attacks or to cryptojacking, in which cloud computing resources are hijacked to mine cryptocurrency.

Steps to minimize the risks of misconfigurations

Configuration management, and especially monitoring, requires a multifaceted approach.

Organizations should implement well-established security practices, such as regular Cloud Security Posture Management assessments, to help detect a range of security defects and misconfigurations. It is important to follow the Least-Privilege principle and to continuously monitor and audit cloud systems.

Maintaining sufficient visibility of cloud assets should be a priority, just as it is in on-premise solutions. Strong identity and access management helps scale permissions to ensure the right level of access to cloud services.
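As a sketch of what a least-privilege review looks for, the following Python fragment flags policy statements that grant wildcard actions or resources. The policy structure loosely mirrors common cloud IAM policies but is a simplified, hypothetical illustration, not any provider’s actual schema.

```python
# Sketch of a least-privilege review: flag "allow" statements that use
# wildcard actions or resources. The policy format is a simplified,
# hypothetical stand-in for real cloud IAM policies.

def overly_broad(policy):
    """Return human-readable warnings for wildcard grants in a policy."""
    warnings = []
    for stmt in policy.get("statements", []):
        if stmt.get("effect") != "allow":
            continue  # deny statements do not widen access
        if "*" in stmt.get("actions", []):
            warnings.append(f"{policy['name']}: allows all actions")
        if "*" in stmt.get("resources", []):
            warnings.append(f"{policy['name']}: applies to all resources")
    return warnings

# A made-up policy: one narrowly scoped statement, one blanket grant.
policy = {
    "name": "reporting-service",
    "statements": [
        {"effect": "allow", "actions": ["storage:read"],
         "resources": ["reports/*"]},
        {"effect": "allow", "actions": ["*"], "resources": ["*"]},
    ],
}

print(overly_broad(policy))
```

In practice, identity and access management tooling performs this analysis continuously, but the principle is the same: blanket grants should be the exception that triggers a review, not the default.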

The identification and prevention of various misconfigurations during cloud migration help enterprises eliminate major security issues. Specialized companies can help by guiding the organization through the entire process and setting everything up correctly.

The ChatGPT AI chatbot could be a gamechanger in cybersecurity, experts say

From the surgical debugging of programming code, to instantly writing an entire block of functional code, to stopping cybercriminals, OpenAI’s newly launched and hugely popular ChatGPT AI chatbot is changing the game, and its capabilities are virtually limitless. And not just in IT.

It has only been around since 30 November last year, but in just a few months it has already been discovered by millions of people around the world. We are talking about an artificial intelligence platform able to answer any question and help with various problems. ChatGPT can answer any general question; write letters, poems, and articles; and even debug and write programming code.

How the ChatGPT AI robot works

This conversational chatbot, backed also by the well-known visionary Elon Musk, who has been involved in AI for years, was developed by OpenAI. ChatGPT is designed to interact with humans in an entertaining way and answer their questions using natural language, which has made it an instant hit among professionals as well as the general public. It works by analysing huge amounts of text. Most of the texts were sourced from the internet, but the chatbot is currently not connected online, which means it won’t tell you the result of yesterday’s Sparta vs Pardubice game. It sees the interaction with the user in context, and hence it can tailor its response to be relevant to the situation. In this way, everyone can learn something.

Experts even suggest that the AI chatbot has the potential to replace the Google search function in the future: “Another very promising feature is its ability to write programming code in any user-selected programming language. This helps developers work on and debug their code, and it helps experts secure their systems,” points out Petr Kocmich, the Global Cyber Security Delivery Manager at Soitron.

How ChatGPT can be used by developers

Today, writing code is not a problem for ChatGPT. What is more, it is absolutely free. On the other hand – at least for now – it is advisable to avoid having the chatbot generate complete code, especially code that has to integrate with other code. The platform is still in an early stage of development, so it would be naive for programmers to expect it to do all the work for them. Having said that, coders and developers can still find the tool useful.

They can use it to find bugs in the code they have written, and to fine-tune problematic code they have spent long hours writing. ChatGPT can help find a bug or a potential problem and offer a possible solution to end those sleepless nights. Its ample computing power saves hours of debugging work and can even help develop source code to test the entire IT infrastructure.

There are some risks

Without much exaggeration, it could be said that ChatGPT can turn anyone into a cybercriminal, making it easier to carry out ransomware, phishing, or malware attacks. It may seem that the AI robot just needs to be asked to “generate the code for a ransomware attack” and then you just wait for the result. But, as Kocmich points out, it’s fortunately not that easy: “Conversations are regularly checked by AI trainers, and responses to this type of query, as well as other potentially harmful queries, are restricted by ChatGPT. In fact, the chatbot responds by saying that it does not support any illegal activities.”

On the other hand, even if it evaluates a question to be potentially harmful and thus refuses to give an answer, this does not necessarily mean that people can’t get to the answer some other way. “The problem with these safeguards is that they rely on the AI recognizing that the user is trying to generate malicious code; however, the true intent may be hidden by rephrasing the question or breaking it into multiple steps,” says Kocmich. Moreover, nobody can guarantee that some other similar AI robot would not refuse to answer such a question.

What to think about ChatGPT


As is often the case, there are two sides to every coin. While AI bots can be exploited by cybercriminals, they can also be used to defend against them. In the meantime, coders could gradually turn into “poets”. They would tell the AI chatbot that they need to write such-and-such a code that does this and doesn’t do that, or describe the same in a case study, and then they just wait for the AI bot to generate the code.

“Already, ChatGPT is being used by security teams around the world for defensive purposes such as code testing, reducing the potential for cyber-attacks by increasing the current level of security in organizations, and training – such as for increasing security awareness,” says Kocmich, adding in the same breath that we should always bear in mind that no tool or software is inherently bad until it is misused.

Cloud communication makes customers happier and employees more loyal

Acquiring and retaining customers is a challenging task in a competitive environment. Now, more than ever, a high-quality product or service is a must, but it is not enough on its own. The key factor determining whether a customer stays or goes to a competitor is customer experience. And that is based on communication.

In retail, as well as other sectors, it is likely that the first thing a customer comes into contact with is a contact platform they communicate with when choosing or returning goods, or when booking an appointment. “Today’s digitally savvy consumers demand a fast, interactive, and personalized response; however, given the multitude of various communication channels and platforms, it may be difficult to keep the customer’s journey clear and simple,” explains Marcel Vejmelka, Senior Business Consultant at Soitron. Thankfully, there are solutions on the market that eliminate this problem and bring this very important part of business to the next level.

It is essential to get rid of silos

There are many systems and platforms ready to meet all requirements. One platform designed specifically for this purpose is Webex Cloud Contact Center (Webex CC): “By deploying it, you can break down communication silos and create an omnichannel environment in which separate channels work together. This includes voice services, SMS, chatbot, web, WhatsApp, Facebook, and other communication channels,” says Vejmelka. This solution also goes beyond a simple connection with the customer. It is directed inwards and supports communication with the company’s business systems and in-house customer data.

It is this interconnection, trend analysis, feedback, and identification of various events in a customer’s life that enables an organization to move towards a personalized customer experience. “For instance, you can set up an automated process to trigger specific communication a month and a week before (or after) a contract expires. You can also customize contacts to specific types of customers, such as loyal customers, high spenders, and Apple or Android users. The possibilities are virtually endless,” says Vejmelka.
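The expiry-based trigger Vejmelka describes boils down to simple date arithmetic. Here is a minimal sketch using only the Python standard library; the sample date and the 30-day approximation of “a month” are illustrative assumptions, not Webex CC behaviour.

```python
# Sketch of an expiry-driven communication trigger: compute the dates on
# which to contact a customer one month and one week before a contract
# expires. The sample expiry date is made up; "one month" is
# approximated as 30 days for simplicity.

from datetime import date, timedelta

def contact_dates(expiry: date) -> dict:
    """Return the reminder dates for a given contract expiry date."""
    return {
        "one_month_before": expiry - timedelta(days=30),
        "one_week_before": expiry - timedelta(days=7),
    }

expiry = date(2024, 6, 30)
for label, when in contact_dates(expiry).items():
    print(label, when.isoformat())
```

A contact-centre platform would run such a check daily against its customer database and route each hit to the appropriate channel (SMS, email, a call-back queue, and so on).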

A platform as a service

Cisco has built the entire platform as a service; its parameters are set explicitly to fit the needs of a specific customer. The platform is rented by a partner/integrator. Before it can be deployed in an organization, it is necessary to conduct a business process analysis. “It is only then that the business processes are implemented and deployed in the platform. This ensures that the investment actually makes sense,” says Jaroslav Martan, Cisco’s Collaboration Specialist.

#CiscoExpertTip from Jaroslav Martan, Cisco’s collaboration expert: Webex CC does not require complex programming

The Webex CC platform is a low-code/no-code solution. The individual building blocks of the platform can be rearranged within the communication logic without complex programming: “This is a major benefit because it allows salespeople, who are most familiar with the intricacies of customer relationships, to be directly involved in the actual platform configuration,” Martan explains.

Since it works with customer data, Webex CC also carefully addresses the issue of security. Cisco cloud solutions adhere to the highest security standards. For example, the content of all phone calls is encrypted so that no unauthorized person can play it back. For European customers, Cisco guarantees GDPR compliance and that their data will not leave the EU.


Getting rid of operator turnover

Webex CC offers more than just benefits for customer relationships. In today’s environment, where hybrid working has become very popular, the system offers functionality for remote operators, allowing them to work from anywhere and at any time. The platform eliminates the majority of on-premise hardware, servers, and other devices requiring regular upgrades or replacement.

With Webex CC, there is no need to build and maintain a physical contact centre. This solves the problem of employee turnover, which is usually huge in this service segment. Operators who work remotely are happier and more loyal, and they ultimately know much more about customer needs. For example, at the telecommunications giant T-Mobile, they were able to move 12,000 operators from contact centres to remote environments with the help of Webex.

No closing time and excellent ROI

Once deployed, Webex CC’s sophisticated configuration delivers very concrete and precisely quantifiable savings. In a paper entitled “The Total Economic Impact of Cisco Webex Contact Center”, Forrester Research analysts calculated that the payback on Webex CC would take thirteen months and that the ROI would be 262%. The cloud platform has no limitations on operation times and can run 24/7 with no problems. It is a more than suitable solution for the e-commerce, healthcare, and financial services segments.

5 basic questions that need to be answered when choosing an RPA supplier

Viktoria Lukáčová Bracjunová


Digitalisation, robotisation, and automation have become an essential part of the strategic thinking of managers and entrepreneurs who look for ways to improve their companies’ performance so that they do not miss the boat and lose their competitiveness.

Anyone who has learned about the benefits of Robotic Process Automation (RPA) and decided to use this technology to increase people’s productivity, reduce costs, speed up processes, and improve efficiency and customer service is inevitably faced with the dilemma of choosing a supplier.

Since process automation solutions can be based on several different technologies, and companies that implement RPA systems may be very different in nature, choosing the right supplier can be quite challenging. Soitron consultants have suggested five questions that need to be answered carefully when choosing an RPA supplier.

  1. What technology will we use?

    For robotic process automation, it is necessary to choose a supplier who will implement the RPA solution as well as the software technology the solution will be based on. The most frequently used tools include UiPath, Automation Anywhere, and Blue Prism.

    Naturally, each of them has its pros and cons that make it more suitable (or unsuitable) for certain cases. They differ in what options they offer, the ways they are used, their abilities to collaborate with existing systems, and in their licence pricing.

    This is why, if the decision on which RPA to use is made by a company’s business department, it is advisable to also consult IT specialists, or to choose a supplier who partners with several RPA tool manufacturers and can thus recommend the most suitable technology to each customer based on practical experience.

  2. What type of supplier should we choose?

    Today RPA solutions are offered by a plethora of different companies, such as consulting firms, software houses, and system integrators. There are pros and cons associated with each type of supplier.

    Companies who have a core business of consulting may emphasise that they have a good understanding of business processes and have extensive experience in optimising them. On the other hand, technology companies may highlight the technical know-how needed to acquire digital data from various systems and integrate the RPA with the existing IT environment.

    The ideal choice seems to be a supplier who has the consulting capacity for optimising business processes as well as strong development and integration skills that allow it to overcome potential technical issues and make automation work well with existing information systems and applications.

  3. How are we going to operate the RPA?

    In addition to choosing the tool and the supplier when implementing a robotic process-automation solution, it is necessary to decide how you intend to operate the software and how you want to pay for it. One option is to purchase all the necessary licences and install the RPA in your own IT environment.

    However, a good supplier will also allow its customers to use software automation as a service. This option may be especially suitable for companies that do not have strong IT departments, or who lack the time and human resources to dedicate to automation.

    Moreover, few companies have processes that require non-stop robot usage. Some companies need to activate automation for no more than one or two hours a day. In such cases, it may be more cost-effective not to purchase licences but rather to pay only for the time that the robot actually works.

  4. How do we maintain and further develop the RPA?

    Software process automation solutions are not off-the-shelf software. That is why it is better to avoid suppliers who approach RPA implementation as a one-off project which is completed by signing the acceptance protocols.

    The way a supplier suggests handling the implementation says a lot about their approach. There is a world of difference between blindly following a customer’s assignment and striving to understand and analyse in detail what impact the automation will have on other related processes. It is equally important to test the solution in different scenarios before handing it over.

    Thorough documentation and testing of the automated process reduces the risk of the solution not working properly. However, a fair supplier will also monitor the process in its live operation for some time and then guarantee support for possible changes in the future.

  5. Will data be handled in compliance with legislation during automation?

    Due to strict data protection legislation, companies must be extremely careful when handling data, including data processed by software robots.

    That is why when choosing an RPA, one should inquire if the supplier can prove what activities the robot performs during the process and if they can support this with corresponding documentation if necessary. It is also important to make sure that companies involved in the project have developers who are certified according to the standards of the technology being used. Moreover, the supplier should guarantee that no data is permanently stored anywhere where it could be exposed to the risk of misuse or of being leaked without the customer’s knowledge.

    Before handing the solution over, a meticulous supplier goes through the source code with the client and transparently explains what the software robot does in individual steps, what data is accessed by the robot, and how it is handled.
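The licence-versus-pay-per-use trade-off raised in question 3 can be sketched with back-of-the-envelope arithmetic. All figures below are hypothetical illustrations, not vendor prices:

```python
# Back-of-the-envelope comparison of a flat annual RPA licence versus
# paying per hour of robot runtime. All prices are hypothetical.

def annual_cost(hours_per_day, rate_per_hour, working_days=250):
    """Annual pay-per-use cost for a robot running a few hours each working day."""
    return hours_per_day * working_days * rate_per_hour

LICENCE_PER_YEAR = 10_000   # assumed flat licence cost
RATE_PER_HOUR = 8           # assumed pay-per-use rate

for hours in (1, 2, 8):
    pay_per_use = annual_cost(hours, RATE_PER_HOUR)
    cheaper = "pay-per-use" if pay_per_use < LICENCE_PER_YEAR else "licence"
    print(f"{hours} h/day: pay-per-use = {pay_per_use}, cheaper option: {cheaper}")
```

Under these assumed figures, a robot needed only an hour or two a day is cheaper as a service, while near-continuous usage favours the licence, which matches the rule of thumb in the text.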

Viktória Lukáčová Bracjunová

Business & New Technologies Products Development Manager
viktoria.bracjunova@soitron.com

Four myths about corporate IT: What managers believe versus reality

When we look at managers and the world of IT, we often hear about the disconnect between the two. But let’s start with something they have in common: few other functions in a company are subject to as much cursing and gossiping by employees as managers and IT staff. However, there are many other reasons that make them ideal allies. Paradoxically, the most important one is company development and competitiveness.

The following article is intended for managers who do not understand their IT staff – be it their own employees or an external provider taking care of hardware, software, data, and applications. Few companies can afford not to have such people. Indeed, for more and more companies, the interplay between their IT department and top management decides whether they survive and enjoy success or get technologically outperformed by their competitors. Together with consultants from Soitron, we have prepared the following list of myths about corporate IT.

1. IT investments do not return

“Managers often complain that there is a mismatch between IT investments and their actual benefits for the company,” says Zbyszek Lugsch, who helps companies improve their technological base. “This is often because companies are unable to quantify their dependence on IT, and this is despite the fact that their core business often depends on it. Today, for many companies, their IT essentially generates revenue.”

According to Lugsch, this misconception has historical roots. In the past, IT staff were perceived more like maintenance workers. Today – when even technologically conservative industries such as manufacturing, engineering, and heavy industry are undergoing digitalization – the role of IT specialists is different. “Our added value is not that we simply take and configure something, but that we come to a company and think about how it operates and how it could work more efficiently without IT being a barrier.”

IT is no longer a competitive advantage only for software development companies or digital service providers. It can also bring an increasingly strong competitive edge for companies in the traditionally conservative heavy industry sector.

Lugsch believes that cloud solutions have made cost efficiency calculations easier, because in a few clicks a company can easily choose storage, servers, and other services – such as a warehouse database or a mail server. However, Lugsch says that such a simplified calculation of the price of partial services distorts the total long-term costs. In this respect, the cloud can actually turn out to be more expensive.

2. The cloud can fully replace a company’s own servers

“Let’s take the example of a local manufacturing plant that produces parts for car manufacturers using a ‘just in sequence’ system [i.e. parts arrive at the assembly line exactly at the moment they are needed]. They have their information system in a data centre in Germany. Can you imagine what would happen if they lost connectivity to Germany? Their production would stop. In their case, this would mean fines that they have no chance of paying,” Lugsch says in describing a typical example. The company solved the issue by creating a local “mini-data” centre with a copy of their central information system. According to Lugsch, there are many such cases. Most companies need at least one application critical to their business, and they certainly need data availability.

“It is important to realize that what data centres usually guarantee as part of their basic package is 99.9% service availability rather than data availability,” says Lugsch’s colleague Štefan Pater. “Services include things like email. It will run with 99 per cent availability – i.e. the mail server may be down for only a few hours a year. However, the availability of company data, such as the data you have stored in your SharePoint, is something entirely different. Information about data availability is often hidden somewhere deep in the small print. If you manage to find it, you realize how much it would cost you to ensure the 99 per cent availability of your files. As a result, you need to either create some backup scenarios so that you can restore the data, or you need to buy another service. And suddenly the price is in a totally different ballpark.”
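The difference between availability percentages is easy to underestimate. A quick calculation shows the maximum downtime per year each SLA level actually permits (a sketch; real SLAs also define measurement windows and exclusions in the small print):

```python
# Maximum downtime per year permitted by a given availability SLA.
# A simple illustration of why "99 per cent" and "99.9 per cent"
# are very different promises.

HOURS_PER_YEAR = 365 * 24  # 8760

def max_downtime_hours(availability_pct: float) -> float:
    """Maximum hours per year a service may be down under the given SLA."""
    return HOURS_PER_YEAR * (1 - availability_pct / 100)

for sla in (99.0, 99.9, 99.99):
    print(f"{sla}% availability -> up to "
          f"{max_downtime_hours(sla):.1f} h of downtime per year")
```

So 99.9% availability still allows roughly a working day of outage per year, while 99% allows well over three days, which is why it matters whether the guarantee covers the service, the data, or both.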

3. Owning servers is expensive

In the past, a company’s IT infrastructure consisted of three separate parts: data storage, servers providing computing power, and networking. Such hardware, physically located and connected somewhere in the corporate server room, is referred to as the “traditional architecture”. Building such an architecture required a significant investment, which is why many companies decided to move their servers and data to the cloud. But then they started to experience problems with application speed, data availability, and often price. Meanwhile, hyperconverged infrastructure emerged as an innovation. It is often cheaper, although paradoxically it is an on-premises solution. The original three parts of the IT infrastructure remain, but they are integrated into a single “box” with unified, automated management.

[Infographic: the three basic pillars of corporate IT infrastructure]

“Hyperconverged solutions have brought the price of on-premises solutions down so much that when I compare the cost of the same computing power, servers, storage, and so on in the cloud, and I also include management costs, I realize that having my own on-premises infrastructure would be actually less expensive. In other words, a company could benefit from more power than in a cloud for the same money.”

4. By having things in the cloud, I do not need as many IT managers

“No one wants to spend more money on IT than is necessary. And even if they wanted to, there is a lack of skilled people in the market. In large companies, the teams are larger, but they are not growing either; the shortage of people with the necessary skills impacts them even more,” says Marianna Richtáriková, an IT network expert at Soitron.

“Finding capable IT people is a real problem these days,” says Lugsch. However, claims of “the end of IT departments” in magazine headlines at the peak of the cloud craze have proved to be overoptimistic. “Over time, it proved to be total nonsense. Paradoxically, this was all the more apparent with smaller companies.”

Utopian visions versus real experience: What lessons have companies learned from the cloud?

“The end of the IT department – is it in the cloud?” This was the question asked by the headline of the British magazine Computer Weekly back in 2009. Ten years ago, this new five-letter technology started to appear on the covers of technology and business magazines. Which parts of what was promised came true? And which parts have been a lesson in over-optimism? Together with three IT specialists from Soitron, we dissected the cloud into bits and pieces.


One of the first cloud ads dates back to 1993. It was an ad by AT&T simply entitled “What Is The Cloud?” (the video is still available on YouTube). The three advertised key benefits of this completely unknown technology were choice, control, and convenience.

The king is dead. Long live … many kings

There is no denying the fact that the cloud has made IT services more accessible. All of a sudden, the traditional players had new competition. Who would have predicted, in Amazon’s early days, that it would one day be selling computing power and data storage as well as material goods?


The arrival of the Windows 7 operating system in 2009 was a major milestone for the cloud. Despite the commercial success of this new system, the cloud market has been much more strongly affected by its competitors.
Source: reprofoto, The Economist

The cloud simplified the calculation and comparison of corporate IT costs. “The cloud made it much easier for IT people to provide relevant arguments about the costs. This is much clearer for CFOs – unlike in the past, when IT departments would say that, in addition to CRM systems, they also needed to buy additional data storage, add servers, increase computing power, and so on,” says Zbyszek Lugsch, an IT consultant from Soitron. On the other hand, it led to the misleading impression that the cloud was always less expensive. “Since the advent of the cloud, new technologies have emerged, such as hyperconverged infrastructure: this has brought the price of on-premises solutions down so much that when I compare the cost of the same computing power, servers, storage, and so on in the cloud, and I also include the management costs, I realize that having my own on-premises infrastructure would actually be less expensive.”

IT specialist: The cloud is not for everyone, but every company can enjoy its benefits

Many companies cannot afford to move their sensitive data and critical applications to the cloud, but they would love to use something that is just as easy to manage – where storage or a new server is just a few clicks away, and you can use it immediately.

One solution is a “hyperconverged infrastructure” integrating all three hardware components – storage disks, computing power, and networking – into a single “box” with unified management. This eliminates the need to change or set up hardware every time a new application or an upgrade is deployed.

Our data is safe … somewhere else

The problem is more than just the protection of sensitive data and GDPR compliance. Many companies cannot migrate fully to the cloud, because the downtime or unavailability of their critical applications would cripple their operations.

“Let’s take the example of a local manufacturing plant that produces parts for car manufacturers using a ‘just in sequence’ system [i.e. parts arrive at the assembly line exactly at the moment they are needed]. They have their information system in a data centre in Germany. Can you imagine what would happen if they lost connectivity to Germany? Their production would stop. In their case, this would mean fines that they have no chance of paying. That is why they have a second data centre in the plant as part of their own hyperconverged infrastructure, where they keep a copy of the information system,” explains Lugsch.


The automotive industry is a sector where there is simply no room for downtime.

This is why some companies that have tried the cloud are considering at least a partial return to their own “on-premises” infrastructure. “For instance, large retail chains used to have strictly centralized systems, but today they tend to switch to distributed systems. This means that they have at least part of their infrastructure in their subsidiaries.”

Sit back, relax … and wait

Lugsch’s colleague Marianna Richtáriková is in charge of computer networks. “Our customers, even the really large ones with the latest broadband connections, sometimes experience sudden traffic overloads and application slow-downs, and then we discover, for example, that this was caused by Microsoft updates,” says Richtáriková, correcting the misconception that with high-speed guaranteed internet corporate IT would be as fast as if the company had it on their own premises. “If someone had believed that the connections would become so inexpensive that it would not be a concern anymore, and that the capacity would increase, this turned out not to be entirely true.”

The bottleneck is not just the line speed and throughput but also the availability of services and, more importantly, data. Many data centres guarantee 99 per cent (or even higher) availability. But it is important to know what it actually means.

The availability of services and the availability of data are two different things. “Services include things like email. It will run with 99 per cent availability – i.e. the mail server may be down for only a few hours a year. However, the availability of your corporate data, such as the data you have stored in your SharePoint, is something entirely different. Information about data availability is often hidden somewhere deep in the small print. If you manage to find it, you realize how much it would cost you to ensure the 99 per cent availability of your files. As a result, you need to either create some backup scenarios so that you can restore the data, or you need to buy another service. And suddenly the price is in a totally different ballpark,” says Soitron consultant Štefan Pater in conclusion.