Key takeaways:
The text criticizes cyber risk assessments that look sound on paper but never translate into engineering decisions or product maintenance. It argues for a paradigm shift toward a risk-based approach grounded in the real context of use and the entire product lifecycle.
- A typical “cyber risk assessment” is often just a low/medium/high table: formally compliant, but with no effect on product architecture or support.
- The EU approach (CRA, MDR, the Machinery Regulation 2023/1230) shifts the focus to the conditions of use and risk control throughout the lifecycle.
- A product risk assessment is not an IT analysis, a pentest, or an ISO 27001 implementation; it concerns the ability of the product and the manufacturer to maintain security over time.
- A vulnerability (e.g., a CVE) is a technical flaw; a product’s risk depends on the context of use, integration, user behavior, patch management, and incident response.
- Incorrectly selected safeguards can increase risk: MFA in OT, automatic updates in IoMT, and locking down interfaces in IoT create new attack vectors and side effects.
In the vast majority of technology companies, a process known as cyber risk assessment already exists. It typically takes the form of a large table, a complex matrix, or a spreadsheet dominated by “low”, “medium”, and “high”. This document contains a long list of generic threats, brief descriptions of potential vulnerabilities, and a set of standard control measures. From a formal standpoint, everything checks out: the document is complete, signed off by management, and safely archived.
The problem is that, in engineering practice, this document changes absolutely nothing.
It does not influence the architecture of the product being designed. It does not drive decisions on how updates will be delivered. It does not change the after-sales support model, and it does not reassess relationships with component suppliers. It is a compliance-correct document, but entirely irrelevant to real product cybersecurity. This is not due to a lack of competence in engineering or security teams. It is a problem of a fundamentally flawed starting point.
Most traditional risk analyses start with the question: “What threats might occur?” Meanwhile, the European Union’s new regulatory approach—clearly visible in instruments such as the Cyber Resilience Act (CRA), medical regulations (MDR), or the new Machinery Regulation 2023/1230—forces a completely different perspective. It starts with the question:
Under what conditions of use can a product become a carrier of risk to the user, the system, or the market—and is the manufacturer able to control that risk throughout its entire life cycle?
This is a fundamental paradigm shift. Cyber risk assessment in a product context is not simply another threat analysis for IT infrastructure. Nor is it a penetration test or a box-ticking implementation of ISO 27001 guidance. It is an in-depth analysis of the product’s capabilities—and the manufacturer’s capabilities—to maintain security over time, in a real operating environment.
Technical vulnerability vs. product risk — a key distinction in cybersecurity
For a risk assessment to stop being just an administrative table and become an engineering decision-making tool, we must clearly separate two concepts that are routinely confused in the market. Remember: a vulnerability is not the same as product risk.
- A technical vulnerability is a specific, identifiable flaw in software, hardware configuration, or the design itself. It may be improper input validation, an outdated software library, or a weakness in an authentication mechanism. Such issues are recorded in databases such as the CVE system (Common Vulnerabilities and Exposures). But this is still an assessment at a purely technical level.
- Product risk only emerges when that vulnerability (or even its temporary absence) is placed into the harsh realities of actual use.
This context includes a range of variables: the operating environment (open vs. isolated network), how the product is integrated with other systems, user behavior, the availability of security updates (so-called patch management), and the manufacturer’s organizational ability to respond to incidents.
A product may have no known vulnerabilities on launch day and still create critical risk if its maker lacks long-term support processes. Understanding this difference brings order to the entire engineering effort. This is exactly what the risk-based approach is built on. Security measures must be proportional to foreseeable misuse scenarios, not to an abstract list pulled from the internet.
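The distinction above can be made concrete in a few lines of code. The sketch below is purely illustrative: the field names, weights, and scoring model are assumptions for demonstration, not part of CVSS, the CRA, or any standard. The point is only that the same technical flaw yields very different product risk once context (exposure, user competence, patchability) is factored in.

```python
from dataclasses import dataclass

@dataclass
class Vulnerability:
    cve_id: str
    technical_severity: float  # e.g. a CVSS-like base score, 0.0-10.0

@dataclass
class UsageContext:
    network_exposed: bool      # open network vs. isolated segment
    operator_is_expert: bool   # configured and used by trained staff?
    patchable_in_field: bool   # can the manufacturer deliver updates?

def product_risk(vuln: Vulnerability, ctx: UsageContext) -> float:
    """Same technical flaw, different product risk depending on context.
    The multipliers are illustrative assumptions, not calibrated values."""
    score = vuln.technical_severity
    if ctx.network_exposed:
        score *= 1.5   # remotely reachable attack surface
    if not ctx.operator_is_expert:
        score *= 1.2   # misconfiguration and misuse become more likely
    if not ctx.patchable_in_field:
        score *= 2.0   # the flaw persists for the device's whole lifetime
    return min(score, 10.0)

same_flaw = Vulnerability("CVE-2099-0001", 6.0)  # hypothetical CVE id
lab = UsageContext(network_exposed=False, operator_is_expert=True,
                   patchable_in_field=True)
field_use = UsageContext(network_exposed=True, operator_is_expert=False,
                         patchable_in_field=False)

print(product_risk(same_flaw, lab))        # 6.0
print(product_risk(same_flaw, field_use))  # 10.0 (capped)
```

A medium-severity flaw in a lab stays medium; the identical flaw in an exposed, unpatchable consumer deployment saturates the scale — which is exactly why a CVE list alone is not a risk assessment.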
The security paradox: When protection increases risk in IT and OT
In cybersecurity design, it is often assumed that adding a new “padlock” always reduces risk. Practice shows it can be exactly the opposite. A measure implemented without understanding the context creates new threat vectors.
1. Strong authentication in an industrial (OT) environment
Imagine implementing multi-factor authentication (MFA) on an industrial machine. From an IT perspective, it is an exemplary step; however, cybersecurity in industrial automation is a completely different reality. In a factory environment, the equipment runs 24/7, and service interventions happen under time pressure. When the network fails, the technician cannot complete the online token verification. Production stops. The result? A frustrated customer disables the mechanism permanently, completely bypassing the protection. The measure that was meant to protect has created a major operational risk.
2. Automatic updates in medical devices (IoMT)
Rapidly patching vulnerabilities reduces exposure to attacks, which is standard practice in IT. However, in the medical device world, a forced update during an ongoing procedure can change system behavior or break communication with the hospital network. In this case, a technological safeguard creates a critical clinical and regulatory risk (loss of MDR certification).
3. Locking down interfaces in consumer devices (IoT)
A manufacturer physically removes local diagnostic ports from a smart home device to make it harder for hackers to gain access. In practice, authorized service providers start troubleshooting via unstable interfaces while bypassing encryption, and advanced users install modified software at scale (custom firmware). The result is a complete loss of control over the ecosystem by the manufacturer.
A genuine cyber risk analysis asks: “How will the stability, usability, and safety of the entire ecosystem change after adding this specific mechanism?”
The 5 most common mistakes in risk analysis for technology products
When assessing processes in companies bringing electronics and software to market, cybersecurity experts see the same patterns repeating.
- Copying the corporate IT model into the product world. A product is not a data center. It operates in an unknown environment, often offline, configured by non-experts. Applying IT tools to product analysis ends up ignoring key issues: the device lifecycle and the lack of enforced updates.
- Analyzing abstract threats instead of real scenarios. Entering phrases like “unauthorized access” into a table is analytically useless. The analysis must be grounded in specifics: Who? Through which interface? For what purpose? With what outcome?
- Ignoring the product lifecycle (End-of-Life). Risk analysis typically focuses on the launch moment. Meanwhile, risk increases over time—open-source libraries age, and component suppliers end support. Without accounting for a maintenance model, the analysis is misleading.
- Working in isolation from the design team (R&D). Treating risk assessment as a pre-audit checklist is a straight path to disaster. The results must flow back to engineers as hard architectural requirements (Secure by Design).
- Fetishizing technology at the expense of procedures. Companies invest a fortune in advanced cryptography, but have no idea who in the organization is responsible for releasing a security fix (patch) after a researcher reports a vulnerability (Vulnerability Disclosure Program).
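The second mistake above — abstract threats instead of scenarios — has a simple structural remedy: force every entry to answer “who, through which interface, for what purpose, with what outcome.” The sketch below is a hypothetical illustration of that discipline; the record type, field names, and example scenario are assumptions, not a prescribed format.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class MisuseScenario:
    actor: str        # who?
    interface: str    # through which interface?
    goal: str         # for what purpose?
    outcome: str      # with what outcome?

# A generic table entry like "unauthorized access" answers none of the
# four questions, so engineers cannot act on it.
specific = MisuseScenario(
    actor="disgruntled service technician",
    interface="local UART debug port",
    goal="extract firmware signing keys",
    outcome="forged updates installable on every fielded device",
)

def as_requirement(s: MisuseScenario) -> str:
    """Turn a concrete scenario into a statement engineers can act on."""
    return (f"Prevent a {s.actor} using the {s.interface} "
            f"from being able to {s.goal} ({s.outcome}).")

print(as_requirement(specific))
```

Once every threat must be expressed this way, vague entries become impossible to write, and each row maps directly to an architectural requirement for the R&D team.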
How to conduct an analysis compliant with the Cyber Resilience Act? 7 engineering steps
Cyber risk assessment must stop being a bureaucratic burden. Below is a market-ready operating model aligned with the EU approach (including CRA requirements).
1. Define the product boundaries. A product is not just hardware. It also includes the mobile app, cloud, OTA servers, and API interfaces.
2. Describe the harsh reality of the operating environment. Don’t describe a lab environment. Answer questions about actual internet connectivity, user competence, and the possibility of physical access to the device.
3. Identify misuse scenarios (Threat Modeling). Drop generic lists. Use proven engineering methods, such as STRIDE-style threat modeling, to focus on precise attack vectors tailored to your industry.
4. Quantify non-financial consequences. Assess the risk of production downtime, threats to health, or breaches of regulatory obligations.
5. Ask the control question. As the manufacturer, can you prevent, detect, and respond quickly to the identified misuse scenario?
6. Design proportionate protective mechanisms. Decisions on encryption or network segmentation must follow directly from the answers given in the previous steps.
7. Design the maintenance process (Lifecycle). Document the security patch delivery policy, the support period, and customer communication channels.
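The seven steps above can be sketched as a single reviewable record. This is a minimal, hypothetical data model — the structure and field names are my assumptions for illustration, not CRA wording — but it shows the key property of the method: step 5 (the control question) feeds directly into step 6, because every “no” answer must produce a safeguard or a documented acceptance of residual risk.

```python
from dataclasses import dataclass

@dataclass
class ProductRiskAssessment:
    product_boundary: list[str]        # step 1: device, app, cloud, APIs
    operating_reality: dict[str, str]  # step 2: connectivity, users, access
    misuse_scenarios: list[str]        # step 3: threat-modeling output
    consequences: dict[str, str]       # step 4: non-financial impact
    control_answers: dict[str, bool]   # step 5: prevent/detect/respond?
    safeguards: list[str]              # step 6: proportionate measures
    lifecycle: dict[str, str]          # step 7: patch policy, support period

    def gaps(self) -> list[str]:
        """Control questions answered 'no': each must be closed by a
        safeguard or an explicitly documented residual risk."""
        return [q for q, ok in self.control_answers.items() if not ok]

assessment = ProductRiskAssessment(
    product_boundary=["device", "mobile app", "OTA server", "cloud API"],
    operating_reality={"network": "intermittent", "users": "non-expert"},
    misuse_scenarios=["technician disables MFA after a network outage"],
    consequences={"downtime": "production line stop"},
    control_answers={"can detect MFA bypass?": False,
                     "can deliver a patch?": True},
    safeguards=["offline-capable authentication fallback"],
    lifecycle={"support period": "5 years", "patch channel": "signed OTA"},
)

print(assessment.gaps())  # ['can detect MFA bypass?']
```

A record like this is exactly the kind of tangible artifact the next section calls for: it can be versioned alongside the product, reviewed by R&D, and shown to an auditor without being rewritten.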
Summary: What should you have after a solid risk assessment?
The outcome of a proper product cybersecurity risk assessment should never be a wall of prose written for an auditor. This work must produce tangible artifacts: a clear map of system boundaries, hard architectural decisions in R&D, an approved budget for the update policy, and a coherent audit trail demonstrating compliance with EU directives.
A technologically mature organization no longer asks: “Are we 100% secure?” Instead, it asks: “Where exactly are we exposed, and can we manage it over time?”
Is your product ready for the new regulations?
Security is a process, not a one-off certificate. If you don’t want risk assessment in your company to be just “paperwork,” it’s worth taking a proactive approach. See how to genuinely prepare your product for the Cyber Resilience Act (CRA) in 2026-2027 before official harmonised standards are published.
Frequently asked questions
Why are most security analyses seemingly correct yet practically useless?
Because it usually meets compliance requirements, but it does not influence engineering decisions: architecture, upgrades, after-sales support, or supplier relationships. As a result, it is formally correct but practically irrelevant.
Where should a product risk analysis start?
Not from a list of abstract hazards, but from the conditions of use in which the product can become a carrier of risk, and from whether the manufacturer is able to control that risk throughout the entire life cycle. This approach is consistent with the regulatory perspective reflected, among others, in the CRA, the MDR, and the Machinery Regulation 2023/1230.
What is the difference between a technical vulnerability and product risk?
A technical vulnerability is a specific flaw in software, configuration, or design (e.g., an authentication weakness) and may be recorded, for example, as a CVE. Product risk arises only in the context of real-world use, integration, user behavior, update availability, and the manufacturer’s ability to respond.
Can adding a security measure increase risk?
Yes — a poorly chosen mechanism can increase risk because it creates new vectors for operational problems. The examples in the text show that MFA in OT, automatic updates in IoMT, or “locking down” interfaces in IoT can lead to workarounds that bypass security controls or to operational and regulatory risks.
What are the most common mistakes in product risk analysis?
They include, among other things, copying the corporate IT model into the product world, relying on generalities instead of scenarios (“who, how, for what purpose, and with what effect”), and overlooking the life cycle and End-of-Life issues. The result is an analysis that does not drive real design decisions or maintenance actions.