In an era of accelerating technological change, artificial intelligence (AI) is transforming enterprises’ core operating models at an unprecedented pace. This report takes a deep look at six themes: spatial computing, the future of AI, intelligent hardware, IT upgrades, quantum computing, and the intelligent core. Whether you are a business decision-maker or a technology leader, it offers strategic insights to help you prepare for the technology upgrades and digital transformation ahead.
Spatial Computing Takes Center Stage
What is the future of spatial computing?
With real-time simulations as just the start, new, exciting use cases can reshape industries ranging from health care to entertainment.
Kelly Raskovich, Bill Briggs, Mike Bechtel, and Ed Burns
Today’s ways of working demand deep expertise in narrow skill sets. Being informed about projects often requires significant specialized training and understanding of context, which can burden workers and keep information siloed.
This has historically been true especially for any workflow involving a physical component. Specialized tasks demanded narrow training in a variety of unique systems, which made it hard to work across disciplines.
One example is computer-aided design (CAD) software. An experienced designer or engineer can view a CAD file and glean much information about the project.
But those outside of the design and engineering realm—whether they’re in marketing, finance, supply chain, project management, or any other role that needs to be up to speed on the details of the work—will likely struggle to understand the file, which keeps essential technical details buried.
Spatial computing is one approach that can aid this type of collaboration. As discussed in Tech Trends 2024, spatial computing offers new ways to contextualize business data, engage customers and workers, and interact with digital systems.
It more seamlessly blends the physical and digital, creating an immersive technology ecosystem for humans to more naturally interact with the world.
For example, a visual interaction layer that pulls together contextual data from business software can allow supply chain workers to identify parts that need to be ordered and enable marketers to grasp a product’s overall aesthetics to help them build campaigns.
Employees across the organization can make sense of, and in turn make decisions with, detailed project information presented in ways anyone can understand.
If eye-catching virtual reality (VR) headsets are the first thing that come to mind when you think about spatial computing, you’re not alone.
But spatial computing is about more than providing a visual experience via a pair of goggles.
It also involves blending standard business data with Internet of Things sensor, drone, light detection and ranging (LIDAR), image, video, and other three-dimensional data types to create digital representations of business operations that mirror the real world.
These models can be rendered across a range of interaction media, whether a traditional two-dimensional screen, lightweight augmented reality glasses, or full-on immersive VR environments.
Spatial computing senses real-world, physical components; uses bridging technology to connect physical and digital inputs; and overlays digital outputs onto a blended interface (figure 1).
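As a rough illustration of that sense-bridge-overlay loop, the sketch below joins detected physical assets with business records and produces labels a display layer could draw. All IDs, field names, and the join logic are invented for illustration; a production system would work with real sensor streams and business systems.

```python
# Hypothetical sketch: sense physical assets, bridge them to business records,
# and overlay labels for a renderer. All IDs, fields, and values are invented.

def sense(lidar_points):
    """Reduce raw spatial input to detected assets with positions."""
    return [{"asset_id": p["id"], "xyz": p["xyz"]} for p in lidar_points]

def bridge(assets, erp_records):
    """Join physical detections with business records by asset ID."""
    erp = {r["asset_id"]: r for r in erp_records}
    return [{**a, **erp.get(a["asset_id"], {})} for a in assets]

def overlay(blended):
    """Produce per-asset labels a display layer could draw at each position."""
    return {a["asset_id"]: f'{a.get("status", "unknown")} @ {a["xyz"]}'
            for a in blended}

lidar = [{"id": "pump-7", "xyz": (3.0, 1.5, 0.0)}]
erp = [{"asset_id": "pump-7", "status": "order part"}]
labels = overlay(bridge(sense(lidar), erp))
```

The point of the sketch is the shape of the pipeline, not the specifics: each stage narrows raw physical input toward something any employee can read in context.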
Spatial computing’s current applications are as diverse as they are transformative.
Real-time simulations have emerged as the technology’s primary use case.
Looking ahead, advancements will continue to drive new and exciting use cases, reshaping industries such as health care, manufacturing, logistics, and entertainment—which is why the market is projected to grow at an annual rate of 18.2% between 2022 and 2033.
The journey from the present to the future of human-computer interaction promises to fundamentally alter how we perceive and interact with the digital and physical worlds.
Now: Filled to the Rim with Sims
At its heart, spatial computing brings the digital world closer to lived reality. Many business processes have a physical component, particularly in asset-heavy industries, but, too often, information about those processes is abstracted, and the essence (and insight) is lost.
Businesses can learn much about their operations from well-organized, structured business data, but adding physical data can help them understand those operations more deeply. That’s where spatial computing comes in.
“This idea of being served the right information at the right time with the right view is the promise of spatial computing,” says David Randle, global head of go-to-market for spatial computing at Amazon Web Services (AWS). “We believe spatial computing enables more natural understanding and awareness of physical and virtual worlds.”
Advanced Simulations: A Primary Application
One of the primary applications unlocked by spatial computing is advanced simulations. Think digital twins, but rather than virtual representations that monitor physical assets, these simulations allow organizations to test different scenarios to see how various conditions will impact their operations.
Imagine:
• A manufacturing company where designers, engineers, and supply chain teams can seamlessly work from a single 3D model to craft, build, and procure all the parts they need.
• Doctors who can view true-to-life simulations of their patients’ bodies through augmented reality displays.
• An oil and gas company that can layer detailed engineering models on top of 2D maps.
The possibilities are as vast as our physical world is varied.
The Portuguese soccer club Benfica’s sports data science team, for example, uses cameras and computer vision to track players throughout matches and develop full-scale 3D models of every move its players make.
The cameras collect 2,000 data points on each player, and AI helps identify specific players, the direction they’re facing, and the critical factors feeding into their decision-making.
The data essentially creates a digital twin of each player, allowing the team to run simulations of how plays would have worked if a player had been in a different position. The X’s and O’s once drawn on a chalkboard are now three-dimensional models that coaches can experiment with.
“There’s been a huge evolution in AI pushing these models forward, and now we can use them in decision-making,” says Joao Copeto, chief information and technology officer at Sport Lisboa e Benfica.
This isn’t only about wins and losses—it’s also about dollars and cents. Benfica has turned player development into a profitable business by leveraging data and AI.
Over the past 10 years, the team has generated some of the highest player-transfer deals in Europe. Similar approaches could also pay dividends in warehouse operations, supply chain and logistics, or any other resource planning process.
Simulations in Medicine
Advanced simulations are also showing up in medical settings.
For instance:
• Virtual patient scenarios can be simulated as a training supplement for nurses or doctors in a more dynamic, self-paced environment than textbooks would allow.
• Fraser Health Authority in Canada has pioneered the use of simulation models to improve care by creating a system-wide digital twin.
This public health authority in British Columbia generated powerful visualizations of patient movement through different care settings and simulations to determine the impact of deploying different care models on patient access.
Although the work is ongoing, Fraser expects improvement in appropriate, need-based access to care through increased patient awareness of available services.
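To make the simulation idea concrete, here is a deliberately tiny capacity model comparing two care configurations for a fixed patient demand. The numbers and the model itself are invented for illustration; Fraser’s actual system-wide digital twin is far richer than this.

```python
# Toy capacity model with invented numbers: compare how two clinic
# configurations affect same-day access for a fixed patient demand.

def simulate(patients, clinics, visits_per_clinic_per_day):
    capacity = clinics * visits_per_clinic_per_day   # total daily visit slots
    seen = min(patients, capacity)                   # patients served today
    return {"seen": seen, "waiting": patients - seen}

baseline = simulate(patients=500, clinics=8, visits_per_clinic_per_day=50)
expanded = simulate(patients=500, clinics=12, visits_per_clinic_per_day=50)
```

Even a model this crude shows the value of the approach: by varying the configuration inputs, planners can compare access outcomes before committing real resources.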
New: Data is the Differentiator
Enterprise IT teams will likely need to overcome significant hurdles to develop altogether-new spatial computing applications, hurdles they likely haven’t faced when implementing more conventional software-based projects.
For one thing, data isn’t always interoperable between systems, which limits the ability to blend data from different sources.
Furthermore, the spaghetti diagrams mapping out the path that data travels in most organizations are circuitous at best, and building the data pipelines to get the correct spatial data into visual systems is a thorny engineering challenge.
Ensuring that data is of high quality and faithfully mirrors real-world conditions may be one of the most significant barriers to using spatial computing effectively.
Rethinking Spatial Data Management
David Randle of AWS notes that spatial data has not historically been well managed at most organizations, even though it represents some of a business’s most valuable information.
“This information, because it’s quite new and diverse, has few standards around it and much of it sits in silos, some of it’s in the cloud, most of it’s not,” says Randle. “This data landscape encompassing physical and digital assets is extremely scattered and not well managed. Our customers’ first problem is managing their spatial data.”
Taking a more systematic approach to ingesting, organizing, and storing this data, in turn, makes it more available to modern AI tools, and that’s where the real learnings begin.
Data Pipelines: The Fuel for Business
We’ve often heard that data is the new oil, but for an American oil and gas company, the metaphor is becoming reality thanks to significant effort in replumbing some of its data pipelines.
The energy company uses drones to conduct 3D scans of equipment in the field and its facilities, and then applies computer vision to the data to ensure its assets operate within predefined tolerances.
It’s also creating high-fidelity digital twins of assets based on data pulled from engineering, operational, and enterprise resource planning systems.
The critical piece? Data integration.
The energy giant built a spatial storage layer, using application program interfaces to connect to disparate data sources and file types, including machine, drone, business, and image and video data.
Few organizations today have invested in this type of systematic approach to ingesting and storing spatial data. Still, it’s a key factor driving spatial computing capabilities and an essential first step for delivering impactful use cases.
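One way to picture such a spatial storage layer is as a registry of connectors that normalize each disparate source into a common, queryable record shape. This is a hypothetical sketch, not the energy company’s actual architecture; every class, source, and field name here is illustrative.

```python
# Hypothetical "spatial storage layer": connectors normalize disparate
# sources (drone scans, ERP data, etc.) into one queryable record store.

class SpatialStore:
    def __init__(self):
        self.connectors = {}   # source name -> normalization function
        self.records = []      # unified record store

    def register(self, source, normalize):
        """Attach a connector that maps a raw payload to a common record."""
        self.connectors[source] = normalize

    def ingest(self, source, payload):
        record = self.connectors[source](payload)
        record["source"] = source
        self.records.append(record)

    def query(self, asset_id):
        """Return every record, from any source, about one physical asset."""
        return [r for r in self.records if r.get("asset_id") == asset_id]

store = SpatialStore()
store.register("drone", lambda p: {"asset_id": p["tag"], "scan_points": p["points"]})
store.register("erp", lambda p: {"asset_id": p["id"], "status": p["status"]})
store.ingest("drone", {"tag": "valve-3", "points": 12000})
store.ingest("erp", {"id": "valve-3", "status": "in tolerance"})
```

The design choice worth noting is the keying on a shared asset identifier: once drone, machine, and business records resolve to the same asset, downstream visual systems can pull a complete picture with one query.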
Multimodal AI Creates the Context
In the past, businesses couldn’t merge spatial and business data into one visualization, but that, too, is changing. As discussed in “What’s next for AI?”, multimodal AI (tools that can take virtually any data type as a prompt and return outputs in multiple formats) is already adept at processing text, image, audio, spatial, and structured data alike.
This capability will allow AI to serve as a bridge between different data sources, and interpret and add context between spatial and business data. AI can reach into disparate data systems and extract relevant insights.
This isn’t to say multimodal AI eliminates all barriers. Organizations still need to manage and govern their data effectively. The old saying “garbage in, garbage out” has never been more apt. Training AI tools on disorganized and unrepresentative data is a recipe for disaster, as AI has the power to scale errors far beyond what we’ve seen with other types of software.
Enterprises should focus on implementing open data standards and working with vendors to standardize data types.
But once they’ve addressed these concerns, IT teams can open new doors to exciting applications. “You can shape this technology in new and creative ways,” says Johan Eerenstein, executive vice president of workforce enablement at Paramount.
Next: AI Is the New UI
Many of the aforementioned challenges in spatial computing are related to integration. Enterprises struggle to pull disparate data sources into a visualization platform and render that data in a way that provides value to the user in their day-to-day work. But soon, AI stands to lower those hurdles.
As mentioned above, multimodal AI can take a variety of inputs and make sense of them in one platform, but that could be only the beginning. As AI is integrated into more applications and interaction layers, it allows services to act in concert. As mentioned in “What’s next for AI?” this is already giving way to agentic systems that are context-aware and capable of executing functions proactively based on user preferences.
These autonomous agents could soon support the roles of supply chain manager, software developer, financial analyst, and more.
What will separate tomorrow’s agents from today’s bots will be their ability to plan ahead and anticipate what the user needs without even having to ask. Based on user preferences and historical actions, they will know how to serve the right content or take the right action at the right time.
When AI agents and spatial computing converge, users won’t have to think about whether their data comes from a spatial system, such as LIDAR or cameras (with the important caveat that AI systems are trained on high-quality, well-managed, interoperable data in the first place), or account for the capabilities of specific applications.
With intelligent agents, AI becomes the interface, and all that’s necessary is to express a preference rather than explicitly program or prompt an application.
Imagine a bot that automatically alerts financial analysts to changing market conditions or one that crafts daily reports for the C-suite about changes in the business environment or team morale.
All the many devices we interact with today, be they phone, tablet, computer, or smart speaker, will feel downright cumbersome in a future where all we have to do is gesture toward a preference and let context-aware, AI-powered systems execute our command. Eventually, once these systems have learned our preferences, we may not even need to gesture at all.
The Full Impact
The full impact of agentic AI systems on spatial computing may be many years out, but businesses can still work toward reaping the benefits of spatial computing. Building the data pipelines may be one of the heaviest lifts, but once built, they open up myriad use cases.
Autonomous asset inspection, smoother supply chains, true-to-life simulations, and immersive virtual environments are just a few ways leading enterprises are making their operations more spatially aware.
As AI continues to intersect with spatial systems, we’ll see the emergence of revolutionary new digital frontiers, the contours of which we’re only beginning to map out.
What’s Next for AI?
While large language models continue to advance, new models and agents are proving to be more effective at discrete tasks. AI needs different horses for different courses.
The Speed of AI’s Advancement
Blink and you’ll miss it: The speed of artificial intelligence’s advancement is outpacing expectations.
Last year, as organizations scrambled to understand how to adopt generative AI, we cautioned Tech Trends 2024 readers to lead with need as they differentiate themselves from competitors and adopt a strategic approach to scaling their use of large language models (LLMs).
Today, LLMs have taken root, with up to 70% of organizations, by some estimates, actively exploring or implementing LLM use cases.1
Leading organizations are already considering AI’s next chapter. Instead of relying on foundation models built by large players in AI, which may be more powerful and built on more data than needed, enterprises are now thinking about implementing multiple, smaller models that can be more efficient for business requirements.2
LLMs will continue to advance and be the best option for certain use cases, like general-purpose chatbots or simulations for scientific research, but the chatbot that peruses your financial data to think through missed revenue opportunities doesn’t need to be the same model that replies to customer inquiries.
Put simply, we’re likely to see a proliferation of different horses for different courses.
A series of smaller models working in concert may end up serving different use cases than current LLM approaches. New open-source options and multimodal outputs (as opposed to just text) are enabling organizations to unlock entirely new offerings.3
In the years to come, the progress toward a growing number of smaller, more specialized models could once again move the goalposts of AI in the enterprise.
From Knowledge to Execution
Organizations may witness a fundamental shift in AI from augmenting knowledge to augmenting execution.
Investments being made today in agentic AI could upend the way we work and live by arming consumers and businesses with armies of silicon-based assistants.
Imagine AI agents that can carry out discrete tasks, like delivering a financial report in a board meeting or applying for a grant.
“There’s an app for that” could well become “There’s an agent for that.”
Now: Getting the Fundamentals Right
LLMs are undoubtedly exciting but require a great deal of groundwork.
Instead of building models themselves, many enterprises are partnering with companies like Anthropic or OpenAI, or accessing AI models through hyperscalers.
According to Gartner, AI servers will account for close to 60% of hyperscalers’ total server spending.
While some enterprises have found immediate business value in using LLMs, others remain wary about the accuracy and applicability of LLMs trained on external data.
On an enterprise time scale, AI advancements are still in a nascent phase (crawling or walking, as noted last year). According to recent surveys by Deloitte, Fivetran, and Vanson Bourne, fewer than one-third of generative AI experiments have moved into production, often because organizations struggle to access or cleanse the data needed to run AI programs.
Data as the Foundation
According to Deloitte’s 2024 State of Generative AI in the Enterprise Q3 report, 75% of surveyed organizations have increased their investments in data lifecycle management due to generative AI.
• Data is foundational to LLMs, because bad inputs lead to worse outputs (“garbage in, garbage squared”).
• Data labeling costs can drive significant AI investments.
While some AI companies scrape the internet to build the largest models possible, savvy enterprises create the smartest models possible, with better domain-specific data education.
Example:
LIFT Impact Partners, a Vancouver-based organization that provides resources to nonprofits, is fine-tuning its AI-enabled virtual assistants to help new Canadian immigrants process paperwork.
“When you train it on your organization’s unique persona, data, and culture, it becomes significantly more relevant and effective,” says Bruce Dewar, president and CEO of LIFT Impact Partners. “It’s not just a tool; it’s more like an extension and spokesperson of the organization.”
Challenges with Data Enablement
Organizations surveyed by Deloitte noted challenges with:
• Scaling AI pilots
• Unclear regulations around sensitive data
• Questions about third-party licensed data usage
55% of organizations avoided certain AI use cases due to data-related issues, and an equal proportion are working to enhance data security.
Differentiation:
While out-of-the-box models offered by vendors can help, differentiated AI impact will likely require differentiated enterprise data.
Real-World Value
Two-thirds of organizations surveyed are increasing investments in generative AI after seeing strong value across industries, from:
• Insurance claims review
• Telecom troubleshooting
• Consumer segmentation tools
LLMs are also creating value in specialized use cases like space repairs, nuclear modeling, and material design.
New: Different Horses for Different Courses
While LLMs have vast use cases, they are not always the most efficient choice.
• LLMs require massive resources, focus primarily on text, and augment human intelligence rather than execute discrete tasks.
• Smaller, purpose-built models can better address specific needs.
Future:
In the next 18–24 months, enterprises will likely rely on a toolkit of AI models, including:
1. Small language models (SLMs)
2. Multimodal models
3. Agentic AI systems
These models will help organizations optimize specific tasks without relying on massive, general-purpose LLMs.
Example:
An SLM trained on inventory data could let employees retrieve insights quickly, avoiding manual processing of large datasets that can take weeks.
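A toy version of that inventory example might look like the following, with a rule-based stand-in sitting where a fine-tuned SLM would. The inventory rows and question-handling logic are invented for illustration; the point is the shape of the interaction, not the implementation.

```python
# Illustrative only: a rule-based stand-in for a small language model that
# answers stock questions from structured inventory rows. A real deployment
# would call a fine-tuned SLM here instead.

inventory = [
    {"sku": "A-100", "name": "bearing", "on_hand": 40, "reorder_point": 50},
    {"sku": "B-200", "name": "gasket", "on_hand": 500, "reorder_point": 120},
]

def answer(question, rows):
    """Return a short natural-language answer about current stock."""
    if "reorder" in question.lower():
        low = [r["sku"] for r in rows if r["on_hand"] < r["reorder_point"]]
        return f"Below reorder point: {', '.join(low) or 'none'}"
    return "Question type not supported in this sketch."

reply = answer("Which SKUs need a reorder?", inventory)
```

An employee gets the insight in seconds; the alternative described above, manually combing through large datasets, can take weeks.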
Naveen Rao, vice president of AI at Databricks, believes more organizations will take this systems approach with AI:
“A magic computer that understands everything is a sci-fi fantasy. Rather, in the same way we organize humans in the workplace, we should break apart our problems. Domain-specific and customized models can then address specific tasks, tools can run deterministic calculations, and databases can pull in relevant data. These AI systems deliver the solution better than any one component could do alone.”
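Rao’s systems approach can be sketched as a router in front of three stand-in components: a domain model for open-ended text, a deterministic calculator tool, and a database lookup. None of these are real models or products; every handler below is a stub meant only to show the division of labor.

```python
# Compound-AI sketch: route each request to the component best suited to it.
# All three handlers are stand-ins, not real models or product APIs.

def domain_model(text):
    # Stand-in for a fine-tuned, domain-specific language model.
    return f"summary: {text}"

def calculator(expr):
    # Deterministic tool: exact arithmetic, no model involved.
    a, b = expr.split("+")
    return float(a) + float(b)

CUSTOMER_DB = {"acme": {"tier": "gold", "open_tickets": 2}}

def database(key):
    # Pulls in relevant structured data.
    return CUSTOMER_DB.get(key, {})

def route(kind, payload):
    """Dispatch a request to the right component for its task type."""
    handlers = {"summarize": domain_model, "calc": calculator, "lookup": database}
    return handlers[kind](payload)
```

The design choice mirrors the quote: arithmetic goes to a tool that cannot hallucinate, facts come from a database, and only the genuinely linguistic work is left to a model.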
An added benefit of smaller models is that they can be run on-device and trained by enterprises on smaller, highly curated data sets to solve more specific problems, rather than general queries, as discussed in “Hardware is eating the world.”
Companies like Microsoft and Mistral are currently working to distill such SLMs, built on fewer parameters, from their larger AI offerings, and Meta offers multiple options across smaller models and frontier models.
Finally, much of the progress happening in SLMs is through open-source models offered by companies like Hugging Face or Arcee.AI. Such models are ripe for enterprise use since they can be customized for any number of needs, as long as IT teams have the internal AI talent to fine-tune them.
In fact, a recent Databricks report indicates that over 75% of organizations are choosing smaller open-source models and customizing them for specific use cases. Since open-source models are constantly improving thanks to the contributions of a diverse programming community, the size and efficiency of these models are likely to improve at a rapid clip.
Multimodal Models on the Rise
Humans interact through a variety of mediums: text, body language, voice, and video, among others. Machines are now catching up.
Given that business needs are not contained to text, it’s no surprise that companies are looking forward to AI that can take in and produce multiple mediums.
In some ways, we’re already accustomed to multimodal AI, such as when we speak to digital assistants and receive text or images in return, or when we ride in cars that use a mix of computer vision and audio cues to provide driver assistance.
Multimodal generative AI, on the other hand, is in its early stages. The first major models, Google’s Project Astra and OpenAI’s GPT-4 Omni, were showcased in May 2024, and Amazon Web Services’ Titan offering has similar capabilities.
Progress in multimodal generative AI may be slow because it requires significantly higher amounts of data, resources, and hardware. In addition, the existing issues of hallucination and bias that plague text-based models may be exacerbated by multimodal generation.
Still, the enterprise use cases are promising:
The notion of “train once, run anywhere (or any way)” promises a model that could be trained on text, but deliver answers in pictures, video, or sound, depending on the use case and the user’s preference, which improves digital inclusion.
Example Applications:
• Companies like AMD aim to use the fledgling technology to quickly translate marketing materials from English to other languages or to generate content.
• For supply chain optimization, multimodal generative AI can be trained on sensor data, maintenance logs, and warehouse images to recommend ideal stock quantities.
This also leads to new opportunities with spatial computing, as written about in “Spatial computing takes center stage.”
As the technology progresses and model architecture becomes more efficient, we can expect to see even more use cases in the next 18 to 24 months.
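As one hedged illustration of the supply chain use case above, the sketch below fuses three modalities (a numeric sensor reading, a text maintenance log, and an image-derived shelf count) into a single stock recommendation. The fusion rule is invented for illustration; a real system would learn it from data with a trained multimodal model.

```python
# Invented fusion rule over three modalities; a trained multimodal model
# would replace this hand-written logic in practice.

def recommend_stock(sensor_temp_c, maintenance_log, shelf_count, baseline=100):
    rec = baseline
    if sensor_temp_c > 30:                        # hot warehouse: order less perishable stock
        rec -= 20
    if "breakdown" in maintenance_log.lower():    # recent breakdown: hold a larger buffer
        rec += 30
    rec -= shelf_count                            # subtract what vision already counts on shelves
    return max(rec, 0)

quiet_day = recommend_stock(25, "routine check", 40)
rough_day = recommend_stock(35, "conveyor breakdown", 0)
```

Each input arrives in a different medium, yet the recommendation is one number: that is the “train once, run anywhere” promise in miniature.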
Agentic AI
The third new pillar of AI may pave the way for changes to our ways of working over the next decade.
Large (or small) action models go beyond the question-and-answer capabilities of LLMs and complete discrete tasks in the real world.
Examples:
? Booking a flight based on your travel preferences.
? Providing automated customer support that can access databases and execute needed tasks—likely without the need for highly specialized prompts.
The proliferation of such action models, working as autonomous digital agents, heralds the beginnings of agentic AI, and enterprise software vendors like Salesforce and ServiceNow are already touting these possibilities.
Enterprise Use Case: ServiceNow’s Xanadu Platform
Chris Bedi, chief customer officer at ServiceNow, believes that domain- or industry-specific agentic AI can change the game for human and machine interaction in enterprises.
For instance, in the company’s Xanadu platform, one AI agent can:
1. Scan incoming customer issues against a history of incidents to recommend next steps.
2. Communicate with another autonomous agent that executes those recommendations.
A human reviewer oversees the agent-to-agent communication to approve the hypotheses.
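The triage-and-execute pattern, with a human approving the hand-off between agents, might be sketched like this. This is not ServiceNow’s actual API; the agent names, the incident history, and the approval hook are all hypothetical.

```python
# Hypothetical two-agent pattern with a human approval gate in between.

HISTORY = {"login failure": "reset SSO token"}   # invented incident history

def triage_agent(issue):
    """Recommend a next step by matching the issue against past incidents."""
    return HISTORY.get(issue, "escalate to engineer")

def execution_agent(action, approved):
    """Carry out a recommendation only after human sign-off."""
    if not approved:
        return "blocked: awaiting human approval"
    return f"done: {action}"

def handle(issue, human_approves):
    proposal = triage_agent(issue)
    return execution_agent(proposal, approved=human_approves(proposal))

result = handle("login failure",
                human_approves=lambda p: p != "escalate to engineer")
```

The approval callback is the key seam: the agents coordinate with each other, but a person stays in the loop on every hypothesis before anything executes.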
Other Use Cases:
? One agent could manage workloads in the cloud.
? Another agent could handle customer orders.
“Agentic AI cannot completely take the place of a human,” says Bedi, “but what it can do is work alongside your teams, handling repetitive tasks, seeking out information and resources, doing work in the background 24/7, 365 days a year.”
Liquid Neural Networks: A New AI Frontier
Aside from the categories of AI models noted above, advancements in AI design and execution are also impacting enterprise adoption—namely, the advent of liquid neural networks.
What are liquid neural networks?
? This cutting-edge technology offers greater flexibility by mimicking the human brain’s structure.
? Unlike traditional neural networks, which might require 100,000 nodes, liquid networks can accomplish tasks with just a couple dozen nodes.
Liquid neural networks are designed to run on less computing power with more transparency. This opens up possibilities for embedding AI into edge devices, robotics, and safety-critical systems.
In other words, it’s not just the applications of AI but also its underlying mechanisms that are ripe for improvement and disruption in the coming years.
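For intuition, a single liquid-style neuron can be simulated in a few lines: its effective time constant depends on the input, so the same cell responds with different dynamics to different stimuli. The update follows the liquid time-constant form, but every constant here is arbitrary and the single neuron is purely illustrative, not a trained network.

```python
import math

# One "liquid" neuron: the decay rate itself depends on the input, per the
# liquid time-constant form dx/dt = -(1/tau + f(I))*x + f(I)*A.
# All constants are arbitrary; this is intuition, not a trained network.

def ltc_step(x, inp, dt=0.1, tau=1.0, w=1.0, amp=1.0):
    f = math.tanh(w * inp)                 # input-dependent conductance
    dx = -(1.0 / tau + f) * x + f * amp    # adaptive time-constant dynamics
    return x + dt * dx                     # one Euler integration step

x = 0.0
for _ in range(50):
    x = ltc_step(x, inp=2.0)               # settles near f/(1/tau + f)
```

Because the dynamics live in the equation rather than in huge weight matrices, a handful of such cells can exhibit rich behavior, which is what makes the approach attractive for low-power edge hardware.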
The Systematized Future of Small Models
Naveen Rao, vice president at Databricks, believes more and more enterprises will take a systematized approach to AI: “The idea of an all-knowing master computer is science fiction fantasy. We should instead break problems down, the way we manage human teams. Domain-specific and customized models can handle concrete tasks, tools can perform deterministic calculations, and databases can retrieve the relevant data. Together, these AI systems deliver solutions far more powerful than any single component.”
One advantage of small language models (SLMs) is that they can run directly on devices, and enterprises can train them on highly customized, smaller data sets to solve specific problems rather than broad ones. Microsoft and Mistral, for example, are developing such streamlined small language models, while Meta offers a range of small and frontier models to choose from.
Moreover, much of the progress in SLMs comes from open-source models, such as those offered by companies like Hugging Face or Arcee.AI. These open models are well suited to enterprise use because they can be adapted to different needs, provided the enterprise’s IT team has the AI talent to tune them. A Databricks report shows that more than 75% of enterprises are choosing small open-source models and customizing them for specific use cases. As diverse developer communities continually improve these models, their efficiency and scale are expected to advance rapidly.
The Rise of Multimodal Models
Humans communicate in many ways: text, body language, voice, video, and more. Machines are now working to catch up.
Enterprise needs go far beyond text data, which is why multimodal AI is drawing attention. In fact, we already encounter multimodal AI in daily life: a digital assistant can reply to us with text or images, and a car can provide driving assistance through computer vision and audio cues.
Multimodal generative AI, however, is still in its infancy. In May 2024, Google’s Project Astra, OpenAI’s GPT-4 Omni, and Amazon Web Services’ (AWS) Titan demonstrated early multimodal AI capabilities. Progress has been slower because these systems require vast amounts of data, resources, and hardware. In addition, the hallucination and bias problems of today’s text generation may be even more pronounced in multimodal generation.
Enterprise outlook:
Multimodal AI can “train once, output in many forms”: a model trained on text data can answer in images, video, or audio, depending on user needs. This capability improves the user experience and advances digital inclusion.
Specific scenarios:
1. Enterprises can use it to rapidly translate marketing materials from English into other languages, or to generate content automatically.
2. In supply chain optimization, multimodal AI can combine sensor data, maintenance records, and warehouse imagery to recommend optimal inventory levels.
As the technology matures and model architectures become more efficient, expect new applications to emerge over the next 18 to 24 months.
Next: There’s an Agent for That
In the next decade, AI could be wholly focused on execution instead of human augmentation.
A future employee could make a plain-language request to an AI agent, for example:
“Close the books for Q2 and generate a report on EBITDA.”
Like in an enterprise hierarchy, the primary agent would then delegate the needed tasks to agents with discrete roles that cascade across different productivity suites to take action.
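The delegation flow above can be sketched in a few lines. Everything here is hypothetical: the agent names, the keyword-based task routing, and the result format are invented for illustration, not drawn from any vendor’s API.

```python
# Hypothetical sketch: a primary agent decomposes a plain-language request
# and delegates discrete tasks to role-specific agents, the way a manager
# cascades work across an enterprise hierarchy.

def close_books_agent(task):
    return {"task": task, "status": "done", "artifact": "Q2 ledger closed"}

def reporting_agent(task):
    return {"task": task, "status": "done", "artifact": "EBITDA report drafted"}

ROLE_AGENTS = {
    "finance.close": close_books_agent,
    "finance.report": reporting_agent,
}

def primary_agent(request):
    """Build a plan from the request, then delegate each task to its role agent."""
    plan = []
    if "close the books" in request.lower():
        plan.append(("finance.close", "Close Q2 books"))
    if "ebitda" in request.lower():
        plan.append(("finance.report", "Generate EBITDA report"))
    return [ROLE_AGENTS[role](task) for role, task in plan]

results = primary_agent("Close the books for Q2 and generate a report on EBITDA.")
for r in results:
    print(r["artifact"])
```

A production system would replace the keyword matching with an LLM planner and pass structured messages between agents, but the orchestration shape stays the same.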
As with humans, teamwork could be the missing ingredient that enables the machines to improve their capabilities.
This leads to a few key considerations for the years to come (figure 2):
1. AI-to-AI Communication
Agents will likely have a more efficient way of communicating with each other than human language, as we don’t need human-imitating chatbots talking to each other.
Better AI-to-AI communication can enhance outcomes, as fewer people will need to become experts to benefit from AI. Rather, AI can adapt to each person’s communication style.
2. Job Displacement and Creation
Some claim that roles such as prompt engineer could become obsolete.
However, the AI expertise of those employees will remain pertinent as they focus on managing, training, and collaborating with AI agents as they do with LLMs today.
For example:
A lean IT team with AI experts might build the agents it needs in a sort of “AI factory” for the enterprise.
The significant shift in the remaining workforce’s skills and education may ultimately reward more human skills like creativity and design, as mentioned in previous Tech Trends.
3. Privacy and Security
The proliferation of agents with system access is likely to raise broad concerns about cybersecurity.
This will only become more important as time progresses and more of our data is accessed by AI systems.
New paradigms for risk and trust will be required to make the most out of applying AI agents.
4. Energy and Resources
AI’s energy consumption is a growing concern.
To mitigate environmental impacts, future AI development will need to balance performance with sustainability.
It will need to take advantage of improvements in liquid neural networks or other efficient forms of training AI—not to mention the hardware needed to make all of this work, as we discuss in “Hardware is Eating the World.”
5. Leadership for the Future
AI has transformative potential, as everyone has heard plenty over the last year, but only insofar as leadership allows.
Applying AI as a faster way of doing things the way they’ve always been done will result in:
- At best: missed potential
- At worst: amplified biases
Imaginative, courageous leaders should dare to take AI from calcified best practices to the creation of “next practices,” where we find new ways of organizing ourselves and our data toward an AI-enabled world.
Future Considerations: Data, Data, and More Data
When it comes to AI, enterprises will likely have the same considerations in the future that they do today:
Data, data, and data.
Until AI systems can reach artificial general intelligence or learn as efficiently as the human brain, they will remain hungry for more data and inputs to help them be more powerful and accurate.
Steps taken today to organize, streamline, and protect enterprise data could pay dividends for years to come, as data debt could one day become the biggest portion of technical debt.
Such groundwork should also help enterprises prepare for the litany of regulatory challenges and ethical uncertainties (such as data collection and use limitations, fairness concerns, and lack of transparency) that come with shepherding this new, powerful technology into the future.
The stakes of “garbage in, garbage out” are only going to grow:
It would be much better to opt for genius in, genius squared.
Hardware is Eating the World
After years of “software eating the world,” it’s hardware’s turn to feast.
We previewed in the computation chapter of Tech Trends 2024 that as Moore’s Law comes to its supposed end, the promise of the AI revolution increasingly depends on access to the appropriate hardware.
Case in point: NVIDIA is now one of the world’s most valuable (and watched) companies, as specialized chips become an invaluable resource for AI computation workloads.1
According to Deloitte research based on a World Semiconductor Trade Statistics forecast, the market for chips used only for generative AI is projected to reach over US$50 billion this year.2
A Critical Hardware Use Case: AI-Embedded End-User and Edge Devices
Take personal computers (PCs), for instance. For years, enterprise laptops have been commodified. But now, we may be on the cusp of a significant shift in computing, thanks to AI-embedded PCs.
Companies like AMD, Dell, and HP are already touting the potential for AI PCs to:
- “Future-proof” technology infrastructure
- Reduce cloud computing costs
- Enhance data privacy
With access to offline AI models for image generation, text analysis, and speedy data retrieval, knowledge workers could be supercharged by faster, more accurate AI.
That being said, enterprises should be strategic about refreshing end-user computation on a large scale—there’s no use wasting AI resources that are limited in supply.
The Cost of Advancements: Sustainability in Data Centers
Of course, all of these advancements come at a cost.
Data centers are a new focus of sustainability as the energy demands of large AI models continue to grow.4
The International Energy Agency has suggested that AI will significantly increase data centers’ electricity consumption by 2026, to a level comparable to the annual electricity demand of Sweden or Germany.5
A recent Deloitte study on powering AI estimates that global data center electricity consumption may triple in the coming decade, largely due to AI demand.6
Innovations in energy sources and efficiency are needed to make AI hardware more accessible and sustainable, even as it proliferates and finds its way into everyday consumer and enterprise devices.
Consider this: Unit 1 of the nuclear plant Three Mile Island, which was shut down five years ago due to economic reasons, will reopen by 2028 to power data centers with carbon-free electricity.7
Looking Forward: AI Hardware in IoT
AI hardware is poised to step beyond IT and into the Internet of Things (IoT).
An increasing number of smart devices could become even more intelligent as AI enables them to analyze their usage and take on new tasks (as agentic AI, mentioned in “What’s next for AI?” advances).
Today: Benign use cases, like AI in toothbrushes.
Tomorrow: Robust potential, like AI in lifesaving medical devices.
The true power of hardware could be unlocked when smarter devices bring about a step change in our relationship with robotics.
Now: Chips Ahoy!
A generation of technologists has been taught to believe software is the key to return on investment, given its scalability, ease of updates, and intellectual property protections.9
But now, hardware investment is surging as computers evolve from calculators to cogitators.10
We wrote last year that specialized chips like graphics-processing units (GPUs) were becoming the go-to resources for training AI models.
In its 2024 TMT Predictions report, Deloitte estimated that total AI chip sales in 2024 would be 11% of the predicted global chip market of US$576 billion.11
Growing from roughly US$50 billion today, the AI chip market is forecasted to reach up to US$400 billion by 2027, though a more conservative estimate is US$110 billion (figure 1).
Large Tech Companies and the Growing Demand for AI Hardware
Large tech companies are driving a portion of this demand, as they may build their own AI models and deploy specialized chips on-premises. However, enterprises across industries are seeking compute power to meet their IT goals.
For instance, according to a Databricks report, the financial services industry has had the highest growth in GPU usage, at 88% over the past six months, in running large language models (LLMs) that tackle fraud detection and wealth management.
All of this demand for GPUs has outpaced capacity. In today’s iteration of the Gold Rush, the companies providing “picks and shovels,” or the tools for today’s tech transformation, are winning big.
NVIDIA CEO Jensen Huang has noted that cloud GPU capacity is mostly filled, but the company is also rolling out new chips that are significantly more energy-efficient than previous iterations.
Hyperscalers are buying up GPUs as they roll off the production line, spending almost US$1 trillion on data center infrastructure to accommodate demand from clients who rent GPU usage. All the while, the energy consumption of existing data centers is pushing aging power grids to the brink globally.
New Chips for a New Era: Neural Processing Units (NPUs)
Understandably, enterprises are looking for new solutions. While GPUs are crucial for handling the high workloads of LLMs or content generation, and central processing units are still table stakes, neural processing units (NPUs) are now in vogue.
NPUs, which mimic the brain’s neural network, can accelerate smaller AI workloads with greater efficiency and lower power demands. These chips enable enterprises to:
- Shift AI applications away from the cloud
- Apply AI locally to sensitive data that can’t be hosted externally
This new breed of chip is a crucial part of the future of embedded AI.
Vivek Mohindra, senior vice president of corporate strategy at Dell Technologies, notes:
“Of the 1.5 billion PCs in use today, 30% are four years old or more. None of these older PCs have NPUs to take advantage of the latest AI PC advancements.”
A major refresh of enterprise hardware may be on the horizon.
As NPUs enable end-user devices to run AI offline and allow models to become smaller to target specific use cases, hardware may once again become a differentiator for enterprise performance.
AI’s Transformative Potential
In a recent Deloitte study:
- 72% of respondents believe generative AI’s impact on their industry will be “high to transformative.”
Once AI becomes mainstream thanks to advancements in hardware, that number may edge closer to 100%.
New: Infrastructure is Strategic Again
The heady cloud-computing highs of assumed unlimited access are giving way to a resource-constrained era.
After being relegated to a utility for years, enterprise infrastructure (e.g., PCs) is once again strategic.
Specifically, specialized hardware will likely be crucial to three significant areas of AI growth:
1. AI-embedded devices and the Internet of Things (IoT)
2. Data centers
3. Advanced physical robotics
While the impact on robotics may occur over the next few years, enterprises will likely face decisions about the first two areas in the next 18 to 24 months.
1. Edge Computing Footprint
By 2025, more than 50% of data could be generated by edge devices.
As NPUs proliferate, more devices could run AI models without relying on the cloud. This trend is especially relevant as generative AI model providers focus on creating smaller, more efficient models for specific tasks.
With quicker response times, decreased costs, and greater privacy controls, hybrid computing (a mix of cloud and on-device AI workloads) may become essential for many enterprises. Hardware manufacturers are betting on it.
According to Dell Technologies’ Mohindra:
“Processing AI at the edge is one of the best ways to handle the vast amounts of data required. When you consider latency, network resources, and just sheer volume, moving data to a centralized compute location is inefficient, ineffective, and not secure. It’s better to bring AI to the data, rather than bring the data to AI.”
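A hybrid-computing placement policy of the kind Mohindra describes can be sketched as a simple routing function. The workload fields and thresholds below are assumptions made up for illustration, not a real scheduler’s interface.

```python
# Illustrative routing policy for hybrid computing: keep a workload on-device
# when it is latency-sensitive or touches private data; send it to the cloud
# when it needs a large model. Fields and thresholds are invented.

def route_workload(w):
    if w.get("contains_pii"):                 # privacy: sensitive data stays local
        return "edge"
    if w.get("max_latency_ms", 1e9) < 100:    # tight latency budget favors the edge
        return "edge"
    if w.get("model_params_b", 0) > 10:       # very large models need cloud GPUs
        return "cloud"
    return "edge"

jobs = [
    {"name": "keyboard autocomplete", "max_latency_ms": 30},
    {"name": "contract summarization", "contains_pii": True},
    {"name": "video generation", "model_params_b": 70, "max_latency_ms": 5000},
]
print({j["name"]: route_workload(j) for j in jobs})
```

The point of the sketch is the decision order: privacy and latency constraints are checked before capacity, which is what “bring AI to the data” implies in practice.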
2. The Hardware Refresh is Coming
One major bank predicts that AI PCs will account for more than 40% of PC shipments by 2026.
Similarly, nearly 15% of 2024 smartphone shipments are expected to be capable of running LLMs or image-generation models.
HP’s Alex Thatcher compares this hardware refresh to the major transition from command-line inputs to graphical user interfaces in the 1990s:
“The software has fundamentally changed, replete with different tools and ways of collaborating. You need hardware that can accelerate that change and make it easier for enterprises to create and deliver AI solutions.”
Apple and Microsoft have also fueled this impending hardware refresh by embedding AI into their devices this year.
Strategic Hardware Adoption
As hardware choices proliferate, good governance will be crucial. Enterprises need to answer key questions:
- How many employees need next-generation devices?
- Which areas of the business will benefit most from these advancements?
Chip manufacturers are racing to improve AI horsepower, but enterprises can’t afford to refresh their entire edge footprint with every new advancement.
Instead, businesses should adopt a tiered strategy to ensure these devices are deployed where they can have the greatest impact.
IT, amplified: AI elevates the reach (and remit) of the tech function
As the tech function shifts from leading digital transformation to leading AI transformation, forward-thinking leaders are using this as an opportunity to redefine the future of IT.
Much has been said, including within the pages of Tech Trends, about the potential for artificial intelligence to revolutionize business use cases and outcomes. Nowhere is this more true than in the end-to-end life cycle of software engineering and the broader business of information technology, given generative AI’s ability to write code, test software, and augment tech talent in general.
Deloitte research has shown that tech companies at the forefront of this organizational change are ready to realize the benefits: They are twice as likely as their more conservative peers to say generative AI is transforming their organization now or will within the next year.
We wrote in a Tech Trends 2024 article that enterprises need to reorganize their developer experiences to help IT teams achieve the best results. Now, the AI hype cycle has placed an even greater focus on the tech function’s ways of working. IT has long been the lighthouse of digital transformation in the enterprise, but it must now take on AI transformation. Forward-thinking IT leaders are using the current moment as a once-in-a-generation opportunity to redefine roles and responsibilities, set investment priorities, and communicate value expectations.
More importantly, by playing this pioneering role, chief information officers can help inspire other technology leaders to put AI transformation into practice.
After years of enterprises pursuing lean IT and everything-as-a-service offerings, AI is sparking a shift away from virtualization and austere budgets. Gartner predicts that “worldwide IT spending is expected to total $5.26 trillion in 2024, an increase of 7.5% from 2023.”
As we discuss in “Hardware is eating the world,” hardware and infrastructure are having a moment, and enterprise IT spending and operations may shift accordingly. As both traditional AI and generative AI become more capable and ubiquitous, each of the phases of tech delivery may see a shift from human in charge to human in the loop. Organizations need a clear strategy in place before that occurs.
Based on Deloitte analysis, over the next 18 to 24 months, IT leaders should plan for AI transformation across five key pillars:
1. Engineering
2. Talent
3. Cloud financial operations (FinOps)
4. Infrastructure
5. Cyber risk
This trend may usher in a new type of lean IT
If commercial functions see an increased number of citizen developers or digital agents that can spin up applications on a whim, the role of the IT function may shift from building and maintaining to orchestrating and innovating.
In that case, AI may not only be undercover, as we indicate in the introduction to this year’s report, but may also be overtly in the boardroom, overseeing tech operations in line with human needs.
Now: Spotlight—and higher spending—on IT
For years, IT has been under pressure to streamline sprawling cloud spend and curb costs. Since 2020, however, investments in tech have been on the rise thanks to pent-up demand for collaboration tools and the pandemic-era emphasis on digitalization.
According to Deloitte research:
- From 2020 to 2022, the global average technology budget as a percentage of revenue jumped from 4.25% to 5.49%, roughly double the increase seen from 2018 to 2020.
- In 2024, US companies’ average budget for digital transformation as a percentage of revenue is 7.5%, with 5.4% coming from the IT budget.
As demand for AI sparks another increase in spending, the finding from Deloitte’s 2023 Global Technology Leadership Study continues to ring true: Technology is the business, and tech spend is increasing as a result.
Today, enterprises are grappling with the new relevance of hardware, data management, and digitization in ramping up their usage of AI and realizing its value potential.
In Deloitte’s Q2 State of Generative AI in the Enterprise report, businesses that rated themselves as having “very high” levels of expertise in generative AI were increasing their investment in hardware and cloud consumption much more than the average enterprise.
Overall, 75% of organizations surveyed have increased their investments around data-life-cycle management due to generative AI.
Tech investment strategies are critical
These figures point to a common theme: To realize the highest impact from gen AI, enterprises likely need to accelerate their cloud and data modernization efforts.
AI has the potential to deliver efficiencies in cost, innovation, and a host of other areas, but the first step to accruing these benefits is for businesses to focus on making the right tech investments.
Because of these crucial investment strategies, the spotlight is on tech leaders who are paving the way.
According to Deloitte research:
- Over 60% of US-based technology leaders now report directly to their chief executives, an increase of more than 10 percentage points since 2020.
This is a testament to the tech leader’s increased importance in setting the AI strategy rather than simply enabling it.
Far from a cost center, IT is increasingly being seen as a differentiator in the AI age, as CEOs, following market trends, are keen on staying abreast of AI’s adoption in their enterprise.
The future of IT: Leaner, more integrated, and faster
John Marcante, former global CIO of Vanguard and US CIO-in-residence at Deloitte, believes AI will fundamentally change the role of IT.
He says:
“The technology organization will be leaner, but have a wider purview. It will be more integrated with the business than ever. AI is moving fast, and centralization is a good way to ensure organizational speed and focus.”
IT is gearing up for transformation
As IT gears up for the opportunity presented by AI—perhaps the opportunity that many tech leaders and employees have waited for—changes are already underway in how the technology function organizes itself and executes work.
The stakes are high, and IT is due for a makeover.
New: An AI boost for IT
Over the next 18 to 24 months, the nature of the IT function is likely to change as enterprises increasingly employ generative AI. Deloitte’s foresight analysis suggests that, by 2027, even in the most conservative scenario, gen AI will be embedded into every company’s digital product or software footprint (figure 1), as we discuss across five key pillars.
Engineering
In the traditional software development life cycle, manual testing, inexperienced developers, and disparate tool environments can lead to inefficiencies, as we’ve discussed in prior Tech Trends. Fortunately, AI is already having an impact on these areas. AI-assisted code generation, automated testing, and rapid data analytics all free up developer time for innovation and feature development. In the United States alone, the productivity gain from code generation is estimated to be worth US$12 billion.
At Google, AI tools are being rolled out internally to developers. In a recent earnings call, CEO Sundar Pichai said that around 25 percent of the new code at the technology giant is developed using AI. Shivani Govil, senior director of product management for developer products, believes that “AI can transform how engineering teams work, leading to more capacity to innovate, less toil, and higher developer satisfaction. Google’s approach is to bring AI to our users and meet them where they are—by bringing the technology into products and tools that developers use every day to support them in their work. Over time, we can create even tighter alignment between the code and business requirements, allowing faster feedback loops, improved product market fit, and better alignment to the business outcomes.”
In another example, a health care company used a COBOL code-assist tool to enable a junior developer with no experience in the programming language to generate an explanation file with 95% accuracy.
As Deloitte recently stated in a piece on engineering in the age of gen AI, the developer role is likely to shift from writing code to defining the architecture, reviewing code, and orchestrating functionality through contextualized prompt engineering. Tech leaders should anticipate human-in-the-loop code generation and review to be the standard over the next few years of AI adoption.
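A human-in-the-loop review gate of the kind anticipated above might look like the following sketch. The `ReviewQueue` and `GeneratedPatch` types are hypothetical, invented for illustration rather than taken from any real tool.

```python
# Minimal sketch of a human-in-the-loop gate: AI-generated patches are queued
# and merge only after explicit human approval. The data model is hypothetical.

from dataclasses import dataclass

@dataclass
class GeneratedPatch:
    description: str
    diff: str
    approved: bool = False

class ReviewQueue:
    def __init__(self):
        self.pending = []
        self.merged = []

    def submit(self, patch):          # an agent submits generated code
        self.pending.append(patch)

    def approve(self, patch):         # a human reviewer signs off
        patch.approved = True
        self.pending.remove(patch)
        self.merged.append(patch)

q = ReviewQueue()
p = GeneratedPatch("Add retry to API client", "+ retry(3)")
q.submit(p)
assert not p.approved                 # nothing merges without review
q.approve(p)
print(len(q.merged), p.approved)
```

The invariant worth preserving in any real pipeline is the same one the assert checks: generated code has no path to production that bypasses the human reviewer.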
Technology executives surveyed by Deloitte last year said they struggle to hire workers with critical IT backgrounds in security, machine learning, and software architecture, and are forced to delay funded projects because of a shortage of appropriately skilled talent. As AI becomes the newest in-demand skill, many companies may not be able to find all the talent they need, leading to a hiring gap in which nearly 50% of AI-related positions could go unfilled.
As a result, tech leaders should focus on upskilling their own talent, another area where AI can help. Consider the potential benefits of:
- AI-powered skills gap analyses and recommendations
- Personalized learning paths
- Virtual tutors for on-demand learning
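At its simplest, the first item above, a skills-gap analysis, reduces to a set difference between the skills a role requires and the skills an employee holds. The role and skill names below are invented for this toy illustration.

```python
# Toy skills-gap analysis: compare required vs. held skills and emit a
# personalized learning path. Role and skill names are made up.

ROLE_REQUIREMENTS = {
    "ml_engineer": {"python", "mlops", "prompt_engineering", "security"},
}

def learning_path(employee_skills, role):
    gap = ROLE_REQUIREMENTS[role] - set(employee_skills)
    return sorted(gap)   # recommended topics, in a stable order

path = learning_path({"python", "security"}, "ml_engineer")
print(path)  # ['mlops', 'prompt_engineering']
```

A real system would score proficiency levels and rank recommendations, but the gap computation at the core is this simple.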
For example, Bayer, the life sciences company, has used generative AI to summarize procedural documents and generate rich media such as animations for e-learning. Similarly, AI could generate documentation to help a new developer understand legacy technology and then create an associated learning podcast and exam for the same developer.
At Google, developers thrive on hands-on experience and problem-solving, so leaders are keen to provide AI learning and tools (like coding assistants) that meet developers where they are on their learning journey. “We can use AI to enhance learning, in context with emerging technologies, in ways that anticipate and support the rapidly changing skills and knowledge required to adapt to them,” says Sara Ortloff, senior director of developer experience at Google.
As automation increases, tech talent would take an oversight role and enjoy more capacity to focus on innovation that can improve the bottom line. This could help attract talent since, according to Deloitte research, the biggest incentive that attracts tech talent to new opportunities is the work they would do in the role.
Cloud Financial Operations
Runaway spending became a common problem in the cloud era when resources could be provisioned with a click. Hyperscalers have offered data and tooling for finance teams and CIOs to keep better track of their team’s cloud usage, but many of these FinOps tools still require manual budgeting and offer limited visibility across disparate systems.
The power of AI enables organizations to be more informed, proactive, and effective with their financial management. For example:
- Real-time cost analysis
- Robust pattern detection
- Resource allocation across systems
AI can help enterprises identify more cost-saving opportunities through better predictions and tracking.
As AI demand increases in the coming years, enterprises are likely to see higher cloud costs. However, applying AI to FinOps can justify the investments in AI and optimize costs elsewhere.
Infrastructure
Across the broad scope of IT infrastructure—from toolchains to service management—organizations haven’t seen as much automation as they’d like. Just a few years ago, studies estimated that nearly half of large enterprises were handling key tasks like security, compliance, and service management manually.
The missing ingredient?
Automation that can learn, improve, and react to the changing demands of a business.
Now, this is becoming possible. Automated resource allocation, predictive maintenance, and anomaly detection could all be implemented in systems that natively understand their own real-time status and can take action accordingly.
This emerging view of IT is referred to as “autonomic” IT, inspired by the autonomic nervous system in the human body that adjusts dynamically to internal and external stimuli. In such a system, infrastructure takes care of itself, surfacing only issues that require human intervention.
For instance, eBay is already leveraging generative AI to scale infrastructure and analyze massive amounts of customer data, enabling impactful changes to its platform.
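The autonomic idea, infrastructure that senses its own state and acts, escalating only the exceptions, can be sketched as a minimal "sense, decide, act" reconciliation step. The thresholds and scaling rule below are hypothetical, not any specific platform's policy.

```python
def reconcile(observed_cpu, replicas, min_replicas=1, max_replicas=10):
    """One pass of an autonomic control loop: compare observed load to
    target bounds and return the new replica count."""
    if observed_cpu > 0.80 and replicas < max_replicas:
        return replicas + 1          # scale out under pressure
    if observed_cpu < 0.20 and replicas > min_replicas:
        return replicas - 1          # scale in when idle
    return replicas                  # steady state: no human needed

# The loop runs continuously; operators are paged only on anomalies
# the system cannot resolve itself.
print(reconcile(0.93, replicas=3))  # → 4
print(reconcile(0.10, replicas=3))  # → 2
print(reconcile(0.50, replicas=3))  # → 3
```

Predictive maintenance and anomaly detection plug into the same loop: they change what is observed and when the loop escalates, not the basic sense-decide-act shape.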
Cybersecurity
Although AI simplifies and enhances many IT processes, it also introduces greater complexity in cyber risks. As we discussed last year, generative AI and synthetic media open up new attack surfaces, including:
• Phishing
• Deepfakes
• Prompt injection attacks
As AI proliferates and digital agents become the newest B2B representatives, these risks may worsen.
How enterprises can respond:
• Data authentication: For example, SWEAR, a security company, has pioneered a way to verify digital media using blockchain.
• Data masking
• Incident response
• Automated policy generation
Generative AI can optimize cybersecurity responses and strengthen defenses against attacks.
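SWEAR's blockchain-based verification is proprietary, but the underlying idea of data authentication, fingerprint media at capture and verify the fingerprint later, can be sketched with a plain hash and an in-memory ledger standing in for the chain. All names here are illustrative.

```python
import hashlib

def fingerprint(media_bytes: bytes) -> str:
    """Content fingerprint: any later edit to the media changes the digest."""
    return hashlib.sha256(media_bytes).hexdigest()

# At capture time, the fingerprint is written to a tamper-evident ledger
# (a blockchain in SWEAR's case; a dict stands in here).
ledger = {}
original = b"frame-data-from-camera"
ledger["clip-001"] = fingerprint(original)

# At verification time, recompute and compare.
received = b"frame-data-from-camera"   # unmodified copy
tampered = b"frame-data-from-deepfake"
print(ledger["clip-001"] == fingerprint(received))  # True
print(ledger["clip-001"] == fingerprint(tampered))  # False
```

The ledger's tamper evidence is what makes the scheme useful against deepfakes: an attacker who alters the media cannot also quietly alter the registered fingerprint.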
Rethinking IT Resources
As technology teams adapt to these changes and challenges, many will shift their focus to innovation, agility, and growth enabled by AI. Teams can:
• Streamline IT workflows
• Reduce the need for manual intervention or offshoring
• Focus on higher-value activities
This could lead to a reallocation of IT resources across the board.
As Ian Cairns, CEO of Freeplay, notes:
“As with any major platform shift, the businesses that succeed will be the ones that can rethink and adapt how they work and build software for a new era.”
The new math: Solving cryptography in an age of quantum
Quantum computers are likely to pose a severe threat to today’s encryption practices. Updating encryption has never been more urgent.
Cybersecurity professionals already have a lot on their minds. From run-of-the-mill social engineering hacks to emerging threats from AI-generated content, there’s no shortage of immediate concerns. But while focusing on the urgent, they could be overlooking an important threat vector: the potential risk that a cryptographically relevant quantum computer (CRQC) will someday be able to break much of the current public-key cryptography that businesses rely upon. Once that cryptography is broken, it will undermine the processes that establish online sessions, verify transactions, and assure user identity.
Let’s contrast this risk with the historical response to Y2K, where businesses saw a looming risk and addressed it over time, working backward from a specific time to avert a more significant impact.1 The potential risk of a CRQC is essentially the inverse case: The effect is expected to be even more sweeping, but the date at which such a cryptographically relevant quantum computer will become available is unknown. Preparing for CRQCs is generally acknowledged to be highly important but is often low on the urgency scale because of the unknown timescale. This has created a tendency for organizations to defer the activities necessary to prepare their cybersecurity posture for the arrival of quantum computers.
“Unless it’s here, people are saying, ‘Yeah, we’ll get to it, or the vendors will do it for me. I have too many things to do and too little budget,’” says Mike Redding, chief technology officer at cybersecurity company Quantropi.2 “Quantum may be the most important thing ever, but it doesn’t feel urgent to most people. They’re just kicking the can down the road.”
This complacent mindset could breed disaster because the question isn’t if quantum computers are coming—it’s when. Most experts consider the exact time horizon for the advent of a CRQC to be irrelevant when it comes to encryption. The consensus is that one will likely emerge in the next five to 10 years, but how long will it take organizations to update their infrastructures and third-party dependencies? Eight years? Ten years? Twelve?
Given how long it took to complete prior cryptographic upgrades, such as migrating from cryptographic hashing algorithms SHA1 to SHA2, it is prudent to start now.
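The SHA-1 to SHA-2 swap was trivial at the level of a single call site; what took years was inventorying and updating every dependent system, certificate, and protocol. A minimal sketch of the code-level change:

```python
import hashlib

data = b"message to protect"

# Legacy: SHA-1 produces a 160-bit digest and is deprecated for signatures.
legacy = hashlib.sha1(data).hexdigest()

# Upgraded: SHA-256, part of the SHA-2 family, produces a 256-bit digest.
current = hashlib.sha256(data).hexdigest()

print(len(legacy) * 4, len(current) * 4)  # 160 256
```

The coming PQC migration is the same story at larger scale: the new algorithms are drop-in at individual call sites, but finding every call site across infrastructure, vendors, and partners is the multi-year effort.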
In a recent report, the US Office of Management and Budget said, “It is likely that a CRQC will be able to break some forms of cryptography that are now commonly used throughout government and the private sector. A CRQC is not yet known to exist; however, steady advancements in the quantum computing field may yield a CRQC in the coming decade. Accordingly … federal agencies must bolster the defense of their existing information systems by migrating to the use of quantum-resistant public-key cryptographic systems.”3
The scale of the problem is potentially massive, but fortunately, tools and expertise exist today to help enterprises address it. Recently released post-quantum cryptography (PQC) algorithm standards from the US National Institute of Standards and Technology (NIST) could help to neutralize the problem before it becomes costly, and many other governments around the world are also working on this issue.
Furthermore, a reinvigorated cyber mindset could set enterprises on the road to better security.
Now: Cryptography everywhere
Two of the primary concerns for cybersecurity teams are technology integrity and operational disruption. Undermining digital signatures and cryptographic key exchanges that enable data encryption are at the heart of those fears. Losing the type of cryptography that can guarantee digital signatures are authentic and unaltered would likely deal a major blow to the integrity of communications and transactions. Additionally, losing the ability to transmit information securely could potentially upend most organizational processes.
Enterprises are starting to become aware of the risks posed by quantum computing to their cybersecurity. According to Deloitte’s Global Future of Cyber survey, 52% of organizations are currently assessing their exposure and developing quantum-related risk strategies, and another 30% say they are already taking decisive action to implement solutions to these risks.
“The scale of this problem is sizeable, and its impact in the future is imminent. There may still be time when it hits us, but proactive measures now will help avoid a crisis later. That is the direction we need to take,” says Gomeet Pant, group vice president of security technologies for the India-based division of a large industrial products firm.
Cryptography is now so pervasive that many organizations may need help identifying all the places it appears. It’s in applications they own and manage, and in their partner and vendor systems. Understanding the full scope of the organizational risk that a CRQC would pose to cryptography (figure 1) requires action across a wide range of infrastructures, supply chains, and applications. Cryptography used for data confidentiality and digital signatures to maintain the integrity of emails, macros, electronic documents, and user authentication would all be threatened, undermining the integrity and authenticity of digital communications.
To make matters worse, enterprises’ data may already be at risk, even though there is no CRQC yet. There’s some indication that bad actors are engaging in what’s known as “harvest now, decrypt later” attacks—stealing encrypted data with the notion of unlocking it whenever more mature quantum computers arrive. Organizations’ data will likely continue to be under threat until they upgrade to quantum-resistant cryptographic systems.
“We identified the potential threat to customer data and the financial sector early on, which has driven our groundbreaking work toward quantum-readiness,” said Yassir Nawaz, director of the emerging technology security organization at JP Morgan. “Our initiative began with a comprehensive cryptography inventory and extends to developing PQC solutions that modernize our security through crypto-agile processes.”
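A cryptography inventory like the one described above starts with classifying what is found. The sketch below is a toy triage step with an assumed, far-from-exhaustive algorithm mapping: RSA/ECC key exchange and signatures are what a CRQC running Shor's algorithm would break, while NIST's 2024 PQC selections and well-keyed symmetric primitives are considered quantum-resistant.

```python
# Illustrative mapping only, not a compliance list.
QUANTUM_VULNERABLE = {"RSA-2048", "RSA-4096", "ECDSA-P256", "ECDH-P256", "DH-2048"}
QUANTUM_RESISTANT = {"ML-KEM-768", "ML-DSA-65", "SLH-DSA-128s", "AES-256", "SHA-256"}

def triage(inventory):
    """Split a discovered-cryptography inventory into migration buckets."""
    report = {"migrate": [], "keep": [], "review": []}
    for system, algorithm in inventory:
        if algorithm in QUANTUM_VULNERABLE:
            report["migrate"].append((system, algorithm))
        elif algorithm in QUANTUM_RESISTANT:
            report["keep"].append((system, algorithm))
        else:
            report["review"].append((system, algorithm))  # unknown: inspect manually
    return report

findings = [("vpn-gateway", "RSA-2048"), ("backup-store", "AES-256"),
            ("legacy-app", "3DES")]
print(triage(findings)["migrate"])  # [('vpn-gateway', 'RSA-2048')]
```

The "review" bucket is where most of the real work hides: cryptography buried in vendor systems and partner integrations that no internal scan can classify automatically.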
Given the scale of the issues, upgrading to quantum-safe cryptography could take years, maybe even a decade or more, and we’re likely to see cryptographically relevant quantum computers sometime within that range. The potential threat posed by quantum to cryptography may feel over the horizon, but the time to start addressing it is now (figure 2).
“It is important that organizations start preparing now for the potential threat that quantum computing presents,” said Matt Scholl, computer security division chief at NIST. “The journey to transition to the new postquantum-encryption standards will be long and will require global collaboration along the way. NIST will continue to develop new post-quantum cryptography standards and work with industry and government to encourage their adoption.”
The intelligent core: AI changes everything for core modernization
For years, core and enterprise resource planning systems have been the single source of truth for enterprises’ systems of records. AI is fundamentally challenging that model.
Many core systems providers have gone all in on artificial intelligence and are rebuilding their offerings and capabilities around an AI-first model. The integration of AI into core enterprise systems represents a significant shift in how businesses operate and leverage technology for competitive advantage.
It’s hard to overstate AI’s transformative impact on core systems. For years, the core and the enterprise resource planning tools that sit on top of it were most businesses’ systems of record—the single source of truth. If someone had a question about any aspect of operations, from suppliers to customers, the core had the answer.
AI is not simply augmenting this model; it’s fundamentally challenging it. AI tools have the ability to reach into core systems and learn about an enterprise’s operations, understand its processes, replicate its business logic, and so much more. This means that users don’t necessarily have to go directly to core systems for answers to their operational questions, but rather can use whatever AI-infused tool they’re most familiar with. Thus, this transformation goes beyond automating routine tasks to fundamentally rethinking and redesigning processes to be more intelligent, efficient, and predictive. It has the potential to unleash new ways of doing business by arming workers with the power of AI along with information from across the enterprise.
No doubt, there will be integration and change management challenges along the way. IT teams will need to invest in the right technology and skills, and build robust data governance frameworks to protect sensitive data. The more AI is integrated with core systems, the more complicated architectures become, and this complexity will need to be managed. Furthermore, teams will need to address issues of trust to help ensure AI systems are handling critical core operations effectively and responsibly.
But tackling these challenges could lead to major gains. Eventually, we expect AI to progress beyond being the new system of record to become a series of agents that not only do analyses and make recommendations but also take action. The ultimate endpoint is autonomous decision-making, enabling enterprises to operate far more quickly than their current pace of operations allows.
Now: Businesses need more from systems of record
Core systems and, in particular, enterprise resource planning (ERP) platforms are increasingly seen as critical assets for the enterprise. There’s a clear recognition of the value that comes from having one system hold all the information that describes how the business operates. For this reason, the global ERP market is projected to grow at a rate of 11% from 2023 through 2030. This growth is driven by a desire for both greater efficiency and more data-driven decision-making.1
The challenge is that relatively few organizations are realizing the benefits they expect from these tools. Despite an acknowledgment that a centralized single source of truth is key to achieving greater operational efficiency, many ERP projects don’t deliver. According to Gartner research, by 2027, more than 70% of recently implemented ERP initiatives will fail to fully meet their original business case goals.2
Part of the reason ERP projects may fail to align with business goals is that the systems tend to be one-size-fits-all. Businesses have had to mirror their operations to the ERP system’s model, and applications across the organization were expected to integrate with the ERP. Because it was the system of record and held all business data and business logic, organizations acquiesced to these demands, even when they were hard to meet. However, this produced a certain level of disconnect between the business and the ERP system.
AI is breaking this model. Some enterprises are looking to reduce their reliance on monolithic ERP implementations, and AI is likely to be the tool that allows them to by opening up data sets and enabling new ways of working.
New: AI augments the core
With some evolution, ERP systems will likely maintain their current position as systems of record. In most large enterprises, they still hold virtually all the business data, and organizations that have spent the last several years implementing ERP systems will likely be reluctant to move on from them.
Orchestrating the platform approach
In this model, today’s core systems become a platform upon which AI innovations are built. However, this prospect raises multiple questions around AI orchestration that IT and business leaders will have to answer. Do they use the modules provided by vendors, use third-party tools, or, in the case of more tech-capable teams, develop their own models? Relying on vendors means waiting for functionality but may come with greater assurance of easy integration.
Another question is how much data to expose to AI. One of the benefits of generative AI is its ability to read and interpret data across different systems and file types. This is where opportunities for new learnings and automation come from, but it could also present privacy and security challenges. In the case of core systems, we’re talking about highly sensitive HR, finance, supplier, and customer information. Feeding this data into AI models without attention to governance could create new risks.
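One hedged sketch of the governance idea behind the data-exposure question: mask obviously sensitive tokens before records leave core systems for an AI model. The patterns below are illustrative and intentionally rough; production masking relies on data-classification services, not three regexes.

```python
import re

MASKS = [
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[SSN]"),           # US Social Security format
    (re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"), "[EMAIL]"),
    (re.compile(r"\b(?:\d[ -]?){13,16}\b"), "[CARD]"),         # rough card-number shape
]

def mask(text: str) -> str:
    """Redact sensitive tokens before a record is fed to an AI model."""
    for pattern, label in MASKS:
        text = pattern.sub(label, text)
    return text

record = "Reimburse jane.doe@example.com, SSN 123-45-6789, card 4111 1111 1111 1111."
print(mask(record))  # Reimburse [EMAIL], SSN [SSN], card [CARD].
```

Masking at the boundary lets teams expose the operational shape of core data to AI tooling while keeping the raw HR, finance, and customer identifiers inside the governed system.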
There’s also the question of who should own initiatives to bring AI to the core. This is a highly technical process that demands the skills of IT—but it also supports critical operational functions that the business should be able to put its fingerprints on.
The answer to these questions will likely look different from use case to use case and even enterprise to enterprise. But teams should think about them and develop clear answers before going all in on AI in the core. These answers form the foundation on which the technology’s larger benefits rest.
“To get the most out of AI, companies should develop a clear strategy anchored in their business goals,” says Eric van Rossum, chief marketing officer for cloud ERP and industries at SAP. “AI shouldn’t be considered as a stand-alone functionality, but rather as an integral, embedded capability in all business processes to support a company’s digital transformation.”
AI enables new ways of working
Forward-looking enterprises are already answering these orchestration questions. Graybar, a wholesale distributor of electrical, industrial, and data communications solutions, is in the middle of a multiyear process of modernizing a 20-year-old core system implementation, which started with upgrades to its HR management tools and is now shifting to ERP modernization. It’s leaning on the best modules available from its core systems vendors when it makes sense, while also layering on third-party integrations and homegrown tools when there’s an opportunity to differentiate its products and services.
The growth of AI presented leaders at the company with an opportunity to not only upgrade its tech stack, but also to think about how to reshape processes to drive new efficiencies and revenue growth. Trust has been a key part of the modernization efforts. The company is rolling out AI in narrowly tailored use cases where tools are applied to ensure reliability, security, and clear business value.
Original title: Deloitte: Six Technology Trends for 2025 (In-Depth Analysis)
Source: WeChat official account 深圳市賽姆烯金科技有限公司. Please credit the source when reposting.