Gartner, Inc. | G00800663 Page 1 of 23
Predicts 2024: AI & Cybersecurity — Turning
Disruption Into an Opportunity
Published 4 December 2023 - ID G00800663 - 27 min read
By Analyst(s): Jeremy D'Hoinne, Avivah Litan, Nader Henein, Mark Horvath, Akif Khan,
Robertson Pimentel, Bart Willemsen, Dennis Xu, William Dupre
Initiatives: Cyber Risk; Meet Daily Cybersecurity Needs
Gartner predicts that AI will durably disrupt cybersecurity in
positive ways, but also create many short-term disillusions.
Security and risk management leaders need to accept that 2023
was only the starter for generative AI, and prepare for its
evolutions.
Overview
Key Findings
■ Generative AI (GenAI) is the latest in a long line of proclaimed disruptive technologies promising to fulfill organizations' ongoing desire to drastically increase productivity metrics for all teams via task automation.

■ Today, most GenAI functions built into security products focus on adding natural language interfaces to existing products to improve efficiency and usability, but promises of full automation are starting to appear. Past attempts to fully automate complex security activities, including those using machine learning techniques, have rarely been entirely successful; they can be a wasteful distraction today and bring short-term disillusionment.

■ GenAI is at peak hype, driving very aggressive predictions based on the state of the technology today. This leads to unrealistic disruption claims, and also ignores the next steps in GenAI evolution, such as multimodal models and composite AI.

■ The initial forays by cybersecurity vendors into generative AI offer only a limited glimpse of the technology's promise and might not be the best indication of what the future could be.
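The "natural language interface" pattern described above can be pictured as a thin layer that maps an analyst's plain-language question onto an existing product's query API. The sketch below is purely illustrative and uses keyword rules instead of an LLM; every field and query syntax here is hypothetical, not any vendor's actual interface:

```python
# Minimal sketch of a natural-language layer over an existing security
# product's query API. Everything here is hypothetical: real GenAI
# assistants use an LLM, not keyword rules, to perform this mapping.

def question_to_query(question: str) -> str:
    """Map an analyst's plain-language question to a (made-up) log query."""
    q = question.lower()
    filters = []
    if "failed login" in q or "authentication failure" in q:
        filters.append('event_type = "auth_failure"')
    if "last 24 hours" in q or "past day" in q:
        filters.append("timestamp >= now() - 24h")
    if "admin" in q:
        filters.append('user_role = "admin"')
    if not filters:
        # Fall back gracefully when no intent is recognized.
        return "SELECT * FROM events LIMIT 100"
    return "SELECT * FROM events WHERE " + " AND ".join(filters)

print(question_to_query("Show failed logins by admin accounts in the last 24 hours"))
```

The value proposition in the findings above is exactly this translation step: the analyst keeps the existing product's data and workflow, and the interface only lowers the query-language barrier.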
This research note is restricted to the personal use of chenlizhen@qianxin.com.
Recommendations
Security and risk management (SRM) leaders in charge of developing a cybersecurity roadmap should:
■ Construct a multiyear approach for progressively integrating GenAI features and products where they augment security workflows, starting with application security and security operations.

■ Evaluate efficiency gains in tandem with the cost of GenAI implementations, and refine your detection and productivity metrics to account for new GenAI cybersecurity features.

■ Prioritize investments in AI augmentation of the workforce, not just task automation. Prepare for increased short-term spend and long-term changes in skill requirements due to GenAI, and monitor potential shifts in attack success driven by GenAI.

■ Account for potential privacy challenges, and balance expected benefits against the risks and cumulative costs when evaluating large-scale GenAI adoption in security.

Strategic Planning Assumptions

By 2028, multiagent AI in threat detection and incident response will rise from 5% to 70% of AI implementations, primarily to augment, not replace, staff.

Through 2025, generative AI will cause a spike in the cybersecurity resources required to secure it, driving more than 15% incremental spend on application and data security.

By 2026, 40% of development organizations will use AI-based autoremediation of insecure code from application security testing (AST) vendors as a default, up from less than 5% in 2023.

By 2026, attacks using AI-generated deepfakes on face biometrics will lead 30% of enterprises to no longer consider such identity verification and authentication solutions reliable in isolation.

By 2028, the adoption of generative augments will collapse the skills gap, removing the need for specialized education from 50% of entry-level cybersecurity positions.
Analysis
What You Need to Know
Predictions are statements of Gartner’s positions and actionable advice about the future.
This research highlights Gartner Predicts research relevant to security and risk management leaders who must navigate aggressive claims that GenAI is disrupting cybersecurity. Past experience with "AI washing" breeds skepticism, as earlier waves caused expensive investments that didn't deliver the expected results.
In 4 Ways Generative AI Will Impact CISOs and Their Teams, Gartner gives recommendations on areas of immediate focus for security leaders:

■ Manage the consumption of hosted and embedded GenAI applications.

■ Update application security practices for AI applications, using AI trust, risk and security management (AI TRiSM) technologies.

■ Assess the first wave of GenAI announcements from cybersecurity providers, and put a plan in place to integrate new features and products when they are more mature.

■ Acknowledge that malicious actors will also use GenAI, and be prepared for unpredictable changes in the threat landscape.

Excessive hype distorts our perception of time and balance, but roadmap planning requires that cybersecurity leaders factor in all possibilities, even without a strong fact base that balances cybersecurity realities with GenAI hopes and promises (see Figure 1).
Figure 1: Balancing Cybersecurity Reality with GenAI Hopes
The cybersecurity industry has long been obsessed with fully automated solutions. The hype surrounding GenAI has already led to unrealistic promises, potentially damaging the credibility of the longer-term improvements coming from future features and products.

2023 was the year of GenAI announcements; 2024 should be the year of minimum viable products; 2025 might be the first year of GenAI integration in security workflows delivering real value.
As stated in the Hype Cycle for Generative AI, 2023, "Several innovations have a five- to 10-year period to mainstream adoption." This is the case for "autonomous agents," and Gartner believes that cybersecurity leaders focusing on human augmentation will achieve better results than those jumping too quickly on solutions promising full automation.

In the shorter term, we'll see cybersecurity use cases expand from experiments with multimodal GenAI (i.e., models learning from more than text content), and our ability to measure productivity gains will improve (see Innovation Insight: Multimodal AI Explained).
Strategic Planning Assumptions
Strategic Planning Assumption: By 2028, multiagent AI in threat detection and incident response will rise from 5% to 70% of AI implementations, primarily to augment, not replace, staff.
Analysis by: Jeremy D’Hoinne, Dennis Xu
Key Findings:

■ More than a third of the first wave of announcements on GenAI in cybersecurity relate to security operations activities. Touted capabilities range from basic interactive help prompts to new dedicated product announcements aimed at becoming the primary interface for incident response and posture assessments.

■ Full automation of threat detection, alert triage and incident response is the "reach the moon" objective of many threat detection, investigation and response (TDIR) initiatives.

■ History often repeats itself, and GenAI sparks the same overly optimistic hopes for security operations that unsupervised machine learning sparked for threat detection more than five years ago.

■ Conversely, teams with higher maturity might imprudently dismiss generative cybersecurity AI based on the early, immature implementations of large language models (LLMs) in the form of "SOC assistant" prompts.

Near-Term Flag:

Through 2024, less than a third of generative cybersecurity AI implementations will lead to security operations productivity improvements for enterprises, generating more spend.

By 2026, the emergence of new approaches, such as "action transformers," combined with more mature GenAI techniques, will drive semiautonomous platforms that significantly augment tasks executed by cybersecurity teams.

Market Implications:

Building strong security operations is difficult, even for larger, well-funded organizations. Picking the right mix of tools, services and internal staff will suffer if cybersecurity teams invest time in tools that don't deliver on their promise of automation.
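The augmentation-over-automation distinction above can be made concrete with a toy triage helper: instead of auto-closing alerts, it only ranks them so the analyst reviews the riskiest first and keeps the final decision. All fields, thresholds and weights below are invented for this sketch, not any product's logic:

```python
# Toy illustration of human augmentation in alert triage: the script ranks
# alerts for an analyst rather than auto-closing them. All fields and
# scoring weights are invented for this sketch.

def triage_score(alert: dict) -> float:
    """Score an alert so an analyst can review the riskiest ones first."""
    severity_weight = {"low": 1, "medium": 3, "high": 7, "critical": 10}
    score = severity_weight.get(alert.get("severity", "low"), 1)
    if alert.get("asset_is_crown_jewel"):
        score *= 2    # alerts on critical assets jump the queue
    if alert.get("seen_before"):
        score *= 0.5  # known patterns sink in the queue, but are never closed
    return score

alerts = [
    {"id": 1, "severity": "medium"},
    {"id": 2, "severity": "high", "asset_is_crown_jewel": True},
    {"id": 3, "severity": "critical", "seen_before": True},
]
# Augmentation: present a ranked queue; the analyst still makes every call.
queue = sorted(alerts, key=triage_score, reverse=True)
print([a["id"] for a in queue])  # → [2, 3, 1]
```

A fully automated pipeline would instead act on these scores directly; the research above argues that keeping the analyst in this loop is what distinguishes the realistic near-term value from the "reach the moon" promises.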