Insights · Report · Security · Jan 2026
Control coverage trends, identity attack paths, and detection engineering capacity across mid-market and large enterprises.

The cybersecurity threat landscape entering 2026 demands that security leaders move beyond compliance checklists and toward continuous posture assessment. Point-in-time audits, while necessary for regulatory obligations, consistently fail to capture the dynamic nature of modern attack surfaces. This snapshot aggregates telemetry, interview findings, and maturity assessment data from over one hundred mid-market and large enterprise engagements conducted during the second half of 2025 to provide an empirical view of where organizations actually stand.
Posture assessment differs from vulnerability scanning in both scope and strategic intent. Where vulnerability management catalogs individual weaknesses, posture measurement evaluates the interplay between controls, processes, staffing, and governance. An organization may maintain a low vulnerability count yet still exhibit dangerous gaps in detection coverage, incident response readiness, or identity hygiene. This report examines each of those dimensions independently and then maps their interactions to produce composite maturity clusters.
Identity continues to dominate incident root causes across every industry vertical we observed. Credential theft, session hijacking, and OAuth token abuse accounted for over sixty percent of initial access vectors in the engagements analyzed. Attackers increasingly chain low-privilege footholds into lateral movement paths that exploit dormant service accounts, overly broad role assignments, and stale group memberships. The pattern is consistent: adversaries prefer stealing credentials to exploiting software vulnerabilities because identity misconfigurations are abundant and detection is often weak.
Organizations that invested in continuous access review and privileged session monitoring showed measurably shorter dwell times in tabletop and live incident outcomes. Continuous access review, distinct from quarterly recertification campaigns, involves automated policy engines that evaluate entitlement drift against baseline role definitions on a daily or weekly cycle. When combined with just-in-time privilege elevation, these programs reduce the window during which a compromised account can traverse high-value systems from weeks to hours.
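The drift-evaluation step can be sketched in a few lines. This is a minimal illustration, not any vendor's policy engine: role names, entitlement strings, and account records are all hypothetical, and a production engine would also pull entitlements from live directory APIs rather than static records.

```python
# Minimal sketch of a continuous access review check: compare each
# account's current entitlements against its baseline role definition
# and flag anything held beyond the baseline (entitlement drift).
# Roles, entitlements, and account IDs below are illustrative.

BASELINE_ROLES = {
    "payments-analyst": {"read:ledger", "read:reports"},
    "platform-admin": {"read:ledger", "write:ledger", "admin:iam"},
}

def entitlement_drift(account):
    """Return entitlements the account holds beyond its baseline role."""
    baseline = BASELINE_ROLES.get(account["role"], set())
    return set(account["entitlements"]) - baseline

accounts = [
    {"id": "svc-batch-01", "role": "payments-analyst",
     "entitlements": {"read:ledger", "read:reports", "admin:iam"}},
    {"id": "jdoe", "role": "payments-analyst",
     "entitlements": {"read:ledger"}},
]

for acct in accounts:
    drift = entitlement_drift(acct)
    if drift:
        # a dormant service account carrying admin rights is exactly
        # the lateral-movement foothold described above
        print(f"{acct['id']}: drifted entitlements {sorted(drift)}")
```

Run on a daily or weekly cycle, a check like this surfaces the stale service-account privileges that quarterly recertification campaigns routinely miss.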
Hybrid identity environments introduce compounding complexity. Enterprises operating both on-premises Active Directory and one or more cloud identity providers must reconcile trust boundaries, conditional access policies, and federation configurations that attackers actively probe. Misaligned password policies between directories, inconsistent multifactor enforcement across federated trust paths, and shadow IT applications registered outside corporate tenant governance create exploitable seams. Security teams should treat identity topology mapping as a recurring exercise, not a one-time project.
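A recurring topology-mapping exercise can be partially automated. The sketch below, with entirely hypothetical directory names and a deliberately simplified policy model, walks each trust path and flags endpoints that disagree on MFA enforcement or minimum password length:

```python
# Illustrative consistency check across a hybrid identity topology:
# flag trust paths whose two endpoints disagree on MFA enforcement
# or password policy. Directory names and policies are hypothetical.

directories = {
    "onprem-ad":   {"mfa_enforced": False, "min_pw_len": 8},
    "cloud-idp":   {"mfa_enforced": True,  "min_pw_len": 14},
    "partner-idp": {"mfa_enforced": True,  "min_pw_len": 12},
}

# federation/trust relationships between directories
trust_paths = [("onprem-ad", "cloud-idp"), ("cloud-idp", "partner-idp")]

def inconsistent_paths(dirs, paths):
    """Return (endpoint_a, endpoint_b, issue) for each policy mismatch."""
    findings = []
    for a, b in paths:
        pa, pb = dirs[a], dirs[b]
        if pa["mfa_enforced"] != pb["mfa_enforced"]:
            findings.append((a, b, "mfa"))
        if pa["min_pw_len"] != pb["min_pw_len"]:
            findings.append((a, b, "password_policy"))
    return findings

for a, b, issue in inconsistent_paths(directories, trust_paths):
    print(f"{a} <-> {b}: mismatched {issue}")
```

A real implementation would source these attributes from directory and IdP APIs and cover far more policy dimensions (conditional access, token lifetimes, app registrations), but the shape of the exercise is the same: enumerate trust paths, diff the policies at each end.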
Detection engineering hiring lags alert volume across nearly every organization in our dataset. The median security operations center processes over ten thousand alerts per day, yet fewer than thirty percent of respondents employ a dedicated detection engineer. The remaining organizations rely on vendor-supplied detection rules with minimal tuning, resulting in high false positive rates that erode analyst morale and mask genuine threats beneath alert noise. Closing this staffing gap is among the most impactful investments a security program can make.
Mature detection programs centralize content lifecycle management using version-controlled repositories that treat detection logic as code. Each detection rule carries metadata describing the MITRE ATT&CK technique it targets, its expected false positive rate, data source dependencies, and validation status. This approach enables teams to measure detection coverage against known adversary behaviors systematically rather than reacting to the latest threat intelligence bulletin with ad hoc rule creation. Organizations at the highest maturity tier maintain coverage maps that quantify gaps per technique and prioritize engineering sprints accordingly.
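A coverage map of this kind reduces to a simple query over rule metadata. In the sketch below the rule names are invented and the metadata schema is a bare minimum; real detection-as-code repositories carry richer fields (data sources, expected false positive rate, owner), but the gap computation is the same:

```python
# Sketch of an ATT&CK coverage map built from detection-rule metadata.
# Technique IDs are real ATT&CK identifiers; the rule set and its
# schema are illustrative only.

PRIORITY_TECHNIQUES = ["T1078", "T1059", "T1021", "T1567"]

rules = [
    {"name": "valid-accounts-anomaly", "technique": "T1078", "validated": True},
    {"name": "powershell-encoded-cmd", "technique": "T1059", "validated": True},
    {"name": "smb-lateral-movement",   "technique": "T1021", "validated": False},
]

def coverage_gaps(rules, techniques):
    """Techniques with no *validated* detection rule behind them."""
    validated = {r["technique"] for r in rules if r["validated"]}
    return [t for t in techniques if t not in validated]

# T1021 has only an unvalidated rule; T1567 has none at all —
# both land in the engineering backlog.
print("coverage gaps:", coverage_gaps(rules, PRIORITY_TECHNIQUES))
```

Because the rules live in version control, this query can run in CI on every merge, so the coverage map is always current rather than rebuilt by hand before each review.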
Purple team exercises serve as the critical feedback loop between offensive insight and defensive capability. Rather than treating red team engagements as pass-fail audits, leading programs run collaborative purple team sessions on a quarterly cadence. During these sessions, red operators execute specific technique chains while blue defenders validate that sensors fire, alerts propagate, and runbooks produce the expected containment actions. Findings feed directly back into the detection content backlog, creating a virtuous cycle of continuous improvement that outperforms annual penetration testing alone.
Endpoint detection and response adoption has plateaued at high levels among large enterprises, with over ninety percent deploying at least one EDR agent across managed endpoints. However, coverage gaps persist in operational technology environments, contractor-managed devices, and legacy systems running unsupported operating systems. Mid-market organizations trail by roughly fifteen percentage points, often citing agent performance overhead and licensing costs as barriers. The real risk lies not in the absence of EDR but in the lack of tuning and response automation layered on top of it.
Cloud security posture management remains one of the weakest control domains in our dataset. Fewer than forty percent of organizations with significant cloud footprints operate a dedicated CSPM tool with policy-as-code enforcement. Misconfigured storage buckets, overly permissive network security groups, and unencrypted data stores continue to surface in nearly every cloud-focused assessment. The gap is partly cultural: cloud engineering teams accustomed to velocity resist guardrails they perceive as friction, and security teams lack the cloud-native fluency to embed controls into infrastructure-as-code pipelines without slowing deployments.
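Policy-as-code enforcement can be as lightweight as a set of guardrail functions run against resource definitions before deployment. The sketch below assumes a simplified resource schema invented for illustration; it is not any specific CSPM tool's API, but it shows the pattern of encoding the misconfigurations named above as executable checks:

```python
# Minimal policy-as-code sketch: evaluate cloud resource configs
# against guardrails for the common misconfigurations noted above.
# The resource schema and names are assumptions, not a real tool's.

def check_bucket(resource):
    findings = []
    if resource.get("public_access"):
        findings.append("storage bucket allows public access")
    if not resource.get("encrypted", False):
        findings.append("data store is not encrypted at rest")
    return findings

def check_security_group(resource):
    findings = []
    for rule in resource.get("ingress", []):
        # anything other than HTTPS open to the internet is flagged
        if rule["cidr"] == "0.0.0.0/0" and rule["port"] != 443:
            findings.append(f"port {rule['port']} open to the internet")
    return findings

resources = [
    {"type": "bucket", "name": "reports-bucket",
     "public_access": True, "encrypted": False},
    {"type": "sg", "name": "app-sg",
     "ingress": [{"cidr": "0.0.0.0/0", "port": 22},
                 {"cidr": "10.0.0.0/8", "port": 5432}]},
]

CHECKS = {"bucket": check_bucket, "sg": check_security_group}
for res in resources:
    for finding in CHECKS[res["type"]](res):
        print(f"{res['name']}: {finding}")
```

Wired into an infrastructure-as-code pipeline as a pre-merge gate, checks like these give cloud engineering teams fast, local feedback instead of after-the-fact audit findings, which goes some way toward the cultural friction described above.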

Network segmentation and zero trust adoption show encouraging momentum but remain unevenly implemented. Approximately half of large enterprises have deployed microsegmentation in at least one critical environment, yet only a fraction extend those policies to development and staging networks where lateral movement risk is equally consequential. Zero trust architectures, while widely discussed, are most often partially implemented, covering remote access and SaaS applications while leaving east-west traffic within data centers governed by legacy firewall rules that predate current threat models.
The divergence between mid-market and large enterprise security postures widened during 2025. Large enterprises continued to invest in security orchestration, automation, and response platforms that reduce mean time to containment. Mid-market organizations, constrained by smaller security budgets and teams of three to five analysts, increasingly depend on managed detection and response providers. This reliance is pragmatic but introduces vendor concentration risk that boards rarely evaluate. If a single MDR provider suffers a service disruption, dozens of its mid-market clients simultaneously lose detection visibility.
Vulnerability management and patch cadence data reveal a persistent gap between policy and practice. Eighty percent of organizations mandate critical patch application within fourteen days, yet only half consistently meet that target. The lag concentrates in middleware, database engines, and internal tooling that lacks automated update mechanisms. Compensating controls such as virtual patching through web application firewalls or intrusion prevention signatures partially mitigate exposure but should never substitute for a disciplined patching program.
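Measuring the policy-versus-practice gap starts with computing SLA compliance from remediation records. The sketch below uses fabricated findings and a deliberately simple rule: a finding is compliant only if it was patched within the 14-day window, so still-open findings count against the metric.

```python
# Sketch of measuring patch-cadence compliance against a 14-day SLA
# for critical findings. Assets and dates are fabricated examples.
from datetime import date

SLA_DAYS = 14

findings = [
    {"asset": "web-01", "published": date(2025, 11, 1),
     "patched": date(2025, 11, 9)},                      # 8 days: met
    {"asset": "db-02",  "published": date(2025, 11, 1),
     "patched": date(2025, 12, 2)},                      # 31 days: missed
    {"asset": "mw-03",  "published": date(2025, 11, 10),
     "patched": None},                                   # still open: missed
]

def sla_compliance(findings, sla_days=SLA_DAYS):
    """Fraction of findings remediated within the SLA window."""
    met = sum(1 for f in findings
              if f["patched"] is not None
              and (f["patched"] - f["published"]).days <= sla_days)
    return met / len(findings)

print(f"critical patch SLA compliance: {sla_compliance(findings):.0%}")
# prints: critical patch SLA compliance: 33%
```

The interesting analysis is then segmenting this metric by asset class; per the data above, middleware and database tiers typically drag the aggregate down while endpoint fleets with automated updates sit near the target.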
Third-party and supply chain risk surfaced as a growing concern across both cohorts. Organizations average over three hundred technology vendors with some degree of network or data access, yet fewer than twenty percent maintain a continuously updated third-party risk register. Annual questionnaire-based assessments remain the dominant evaluation method despite well-documented limitations. Leading programs supplement questionnaires with continuous external attack surface monitoring, contract clauses requiring evidence of detection engineering maturity, and tabletop exercises that simulate supply chain compromise scenarios.
The snapshot includes anonymized maturity clusters so security leaders can benchmark their programs without exposing sensitive telemetry. Each cluster groups organizations by composite maturity score across five domains: identity governance, detection engineering, endpoint protection, cloud security posture, and network segmentation. Peer comparison within a cluster highlights the specific domains where an organization lags its cohort, enabling targeted investment rather than broad budget increases that spread resources too thin. The clustering methodology uses k-means clustering on normalized domain scores, validated through silhouette analysis, to ensure that groupings reflect genuinely distinct posture profiles.
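A toy version of the clustering step illustrates the mechanics. The scores below are fabricated, the k-means implementation is deliberately minimal (deterministic seeding with the first and last organization rather than random restarts), and the silhouette validation used in the actual methodology is omitted for brevity:

```python
# Toy sketch of the clustering step: min-max normalize per-domain
# maturity scores (so no domain dominates by scale), then run a tiny
# k-means with k=2. All scores are fabricated for illustration.

def normalize(rows):
    """Min-max normalize each domain (column) to [0, 1]."""
    cols = list(zip(*rows))
    lo, hi = [min(c) for c in cols], [max(c) for c in cols]
    return [[(v - l) / (h - l) if h > l else 0.0
             for v, l, h in zip(row, lo, hi)] for row in rows]

def dist2(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

def kmeans(points, k=2, iters=20):
    # seed centers with the first and last point for determinism
    # in this toy example; real pipelines use multiple random restarts
    centers = [list(points[0]), list(points[-1])]
    labels = [0] * len(points)
    for _ in range(iters):
        # assign each point to its nearest center
        labels = [min(range(k), key=lambda j: dist2(p, centers[j]))
                  for p in points]
        # recompute each center as the mean of its members
        for j in range(k):
            members = [p for p, l in zip(points, labels) if l == j]
            if members:
                centers[j] = [sum(c) / len(members) for c in zip(*members)]
    return labels

# rows: one org each; columns: identity, detection, endpoint, cloud, network
orgs = [
    [2, 1, 3, 1, 2],   # lower-maturity profile
    [1, 2, 2, 1, 1],
    [4, 5, 4, 4, 5],   # higher-maturity profile
    [5, 4, 5, 4, 4],
]

print("cluster labels:", kmeans(normalize(orgs)))
```

With well-separated profiles like these, the first two organizations land in one cluster and the last two in the other; on real data, silhouette analysis over candidate values of k guards against clusters that are artifacts of the algorithm rather than genuine posture groupings.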
We recommend that security leaders use this snapshot as an input to board-level risk reporting and annual planning. Translating posture metrics into business language, such as the estimated dwell time reduction achievable through identity governance investment or the cost-per-incident differential between organizations with and without detection-as-code programs, equips leadership to make informed funding decisions. Security teams that anchor budget requests in empirical benchmarks rather than fear-based narratives consistently secure more sustainable, multi-year investment commitments.