Three crucial transformations that will reshape the AI industry in 2026: from text interfaces to autonomous agents and the voice revolution

Artificial intelligence is undergoing a fundamental transition. It is no longer about simple incremental improvements but about a real redefinition of how we interact with intelligent systems. During the recent “Big Ideas for 2026” seminar by Andreessen Horowitz, the fund’s partners outlined how AI agents are shifting from reactive tools to full-fledged digital employees capable of operating autonomously.
The disappearance of the text interface as a starting point
The first radical change involves eliminating the input box as the central element of AI applications. According to projections from the a16z team, by 2026 users will no longer need to formulate complex prompts to get results. Next-generation applications will silently observe our behavior, proactively intervening with pre-processed action proposals, requiring only final approval.
Behind this change lies an unprecedented market opportunity. While traditional software operates within a $300-400 billion annual market, AI agents are opening access to a $13 trillion expenditure on the US workforce—a market expansion of about 30 times. This means AI is no longer competing for a slice of IT spending but for the entire HR budget.
The evolution follows a simple logic: the best employees do not wait for orders; they autonomously identify problems, diagnose causes, propose solutions, and seek approval only at the end of the process. AI agents will need to replicate exactly this behavior. In CRM systems, for example, an intelligent agent will not wait for the salesperson to find contacts but will autonomously scan active opportunities and past email threads, then surface the most promising leads to follow up on.
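The propose-then-approve pattern described above can be sketched in a few lines. This is a minimal illustration, not any vendor's actual implementation: the data shapes (`days_ago`, `contact`) and the 30-day staleness threshold are hypothetical, and the human sits in the loop as a simple `approve` callback.

```python
from dataclasses import dataclass

@dataclass
class Proposal:
    lead: str
    rationale: str

def scan_for_leads(opportunities, emails):
    """Proactively rank opportunities by how long they have gone untouched."""
    proposals = []
    for opp in opportunities:
        last_touch = max(
            (e["days_ago"] for e in emails if e["contact"] == opp), default=None
        )
        if last_touch is not None and last_touch > 30:  # hypothetical threshold
            proposals.append(Proposal(opp, f"no contact in {last_touch} days"))
    return proposals

def agent_loop(opportunities, emails, approve):
    """Identify work autonomously, but act only on explicit human approval."""
    actions = []
    for p in scan_for_leads(opportunities, emails):
        if approve(p):  # human-in-the-loop gate: the final sign-off
            actions.append(f"draft follow-up to {p.lead}")
    return actions
```

The point of the sketch is the inversion of control: the human no longer issues prompts but only vetoes or approves actions the agent has already prepared.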
Software design is no longer for human eyes
The second revolution involves the concept of “agent-first” design, which completely overturns the principles on which software creation has been based for decades. Until now, every interface was optimized for human attention: crucial information in the first paragraphs, well-organized visual details, intuitive flows for mouse clicks. Everything was designed to capture attention and facilitate manual navigation.
With the growth of AI agents as intermediaries, this paradigm becomes obsolete. Agents do not need visually appealing interfaces; they seek something radically different: machine legibility, that is, structural clarity that allows systems to process information accurately and quickly. An AI agent will read the entire text of an article, while a human reads only the first paragraphs. Optimization is no longer about graphic design but about the underlying structure of information.
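One concrete way to read "machine legibility" is publishing content as an explicit, typed structure rather than relying on visual hierarchy to signal importance. The following is a hedged sketch with invented field names (`claims`, `entities`); real agent-facing schemas would be richer, but the principle is the same.

```python
import json

def to_agent_view(article):
    """Expose an article as a flat, explicitly-typed structure:
    every claim is listed, entities are deduplicated, and the date
    is an unambiguous ISO 8601 string. Nothing depends on layout."""
    return json.dumps({
        "title": article["title"],
        "claims": article["claims"],              # all claims, not just the lede
        "entities": sorted(set(article["entities"])),
        "published": article["published"],        # ISO 8601
    }, sort_keys=True)

article = {
    "title": "Agent-first design",
    "claims": ["Agents read the whole text", "Humans skim the first paragraphs"],
    "entities": ["a16z", "a16z", "Slack"],
    "published": "2025-11-01",
}
print(to_agent_view(article))
```

A human-facing page would bury half of this in styling; an agent consuming the JSON gets the full content with no heuristics about which paragraph "matters."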
This has widespread consequences: data providers, content creators, and software developers will all need to redesign their outputs for machines, not for people. Today, we already see engineers consulting AI reports that analyze telemetry data and synthesize insights directly on Slack, rather than manually accessing monitoring dashboards. Sales teams receive data prepared by agents that have already processed CRMs, instead of navigating platforms themselves.
A concerning consequence of this shift could be the emergence of ultra-personalized and massively produced content, specifically generated to satisfy agents’ scanning algorithms—a sort of “AI era keyword stuffing.” It will no longer be about creating relevant and insightful articles but about producing large volumes of low-quality content optimized for what agents are thought to “want to see.”
Voice agents: from science fiction to industrial practice
The third transformation is represented by the rise of voice agents, which in 2025 made a decisive qualitative leap. They are no longer lab experiments but real systems that companies purchase and deploy at scale with surprising speed.
In the healthcare sector, voice agents are finding widespread use: from calls to patients for post-operative follow-ups, appointment reminders, to initial psychiatric interviews. Adoption is driven by tangible pressure—the turnover rate and hiring difficulties in the medical sector make these solutions not a choice but an operational necessity.
In the financial and banking sector, the scenario is even more interesting. One might think that strict compliance and regulation would prevent voice technology from operating, yet the opposite happens. Voice agents can follow compliance scripts with near-perfect consistency, while even well-trained humans occasionally commit violations. Moreover, every voice interaction leaves verifiable digital traces, making performance and compliance monitoring completely transparent.
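The "verifiable digital traces" point can be made concrete with a tamper-evident transcript. The sketch below, a hypothetical design rather than any bank's system, hash-chains each turn of a voice interaction so that later edits to the record are detectable.

```python
import hashlib
import json
import time

class ComplianceLog:
    """Append-only, hash-chained record of voice-agent turns.
    Each entry commits to the previous one, so altering any past
    utterance breaks verification of the whole chain."""

    def __init__(self):
        self.entries = []
        self._prev = "0" * 64  # genesis value

    def record(self, speaker, text, ts=None):
        entry = {
            "speaker": speaker,
            "text": text,
            "ts": ts if ts is not None else time.time(),
            "prev": self._prev,
        }
        digest = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()
        ).hexdigest()
        entry["hash"] = digest
        self._prev = digest
        self.entries.append(entry)
        return digest

    def verify(self):
        prev = "0" * 64
        for e in self.entries:
            body = {k: v for k, v in e.items() if k != "hash"}
            recomputed = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()
            ).hexdigest()
            if e["prev"] != prev or recomputed != e["hash"]:
                return False
            prev = e["hash"]
        return True
```

This is the structural advantage over human phone calls: the audit trail is a by-product of the interaction itself, not a separate quality-assurance process.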
In recruiting, voice agents allow candidates to conduct preliminary interviews at any time, 24/7, significantly shortening the selection process for entry-level and technical roles.
Implications for the labor and services markets
The consolidation of these three trends will have profound effects on call centers and BPO services. Many of these sectors will face a gradual transition, others a sharper decline, depending on how they adapt their business models. The principle is simple: “AI won’t take your job, but a person using AI will.” Providers that can integrate voice technology to offer competitive prices or greater capacity will have a huge advantage.
In the short to medium term, some clients will continue to rely on traditional outsourcing services but will choose providers that have adopted AI systems to reduce costs or increase volume. As baseline models continue to improve and costs decrease, call centers in many regions worldwide will face even greater pressures.
A promising but still underdeveloped area is the application of voice agents in government services. Platforms already handling non-emergency calls to 911 could easily expand to communications with the DMV and other public services, reducing frustration for both citizens and public employees.
In the consumer segment, growth has so far been slow (most use cases are B2B, where ROI is obvious), but an interesting category is emerging: voice companions in care facilities and nursing homes, used both as companions for residents and as tools for health monitoring over time.
An industrial perspective on AI voice
It is crucial to consider voice agents not as an isolated market but as an entire industry with opportunities at every level of the supply chain: from foundational models to development platforms, up to end-user applications. This means the ecosystem will have space for numerous players, each with their specific role.
Recent advances in foundational models have led to dramatic improvements in accuracy and latency. In some cases, companies are even deliberately slowing response times or introducing slight background noise to make interactions feel more “human” and less robotic.
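The deliberate-imperfection technique amounts to two small transforms on the response pipeline: a randomized pause before speaking and faint noise mixed into the audio. A minimal sketch, with all parameter values (delay, jitter, noise level) chosen for illustration only:

```python
import random

def humanize_response(samples, base_delay_ms=400, jitter_ms=250,
                      noise_level=0.005, rng=random.Random(0)):
    """Return (delay_ms, noisy_samples): a small randomized pause before
    the agent speaks, plus low-level noise mixed into a float audio
    buffer so the line never sounds perfectly, unnaturally clean."""
    delay_ms = base_delay_ms + rng.uniform(0, jitter_ms)
    noisy = [s + rng.uniform(-noise_level, noise_level) for s in samples]
    return delay_ms, noisy
```

The counterintuitive design choice is that both transforms make the system measurably *worse* on latency and signal quality, yet better on the metric that matters for adoption: how comfortable callers are talking to it.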
The ability of voice agents to handle multilingual conversations with strong accents offers an additional competitive advantage, enabling global companies to operate without the geographical constraints that characterize human labor.
These three transformations—the elimination of the text interface, the agent-first design, and the industrial rise of voice systems—are not mere predictions but visible signs of a structural change already underway. Those who understand and anticipate these shifts will have the opportunity to redefine entire sectors.
In the short to medium term, some clients will continue to rely on traditional outsourcing services but will choose providers that have adopted AI systems to reduce costs or increase volume. As foundational models continue to improve and costs decrease, call centers in many regions worldwide will face even greater pressures.
A promising but still underdeveloped area is the application of voice agents in government services. Platforms that already triage non-emergency calls reaching 911 dispatch centers could easily expand to communications with the DMV and other public services, reducing frustration for both citizens and public employees.
In the consumer segment, growth has so far been slow (most use cases are B2B, where ROI is obvious), but an interesting category is emerging: voice companions in care facilities and nursing homes, used both as companions for residents and as tools for health monitoring over time.
An industrial perspective on AI voice
It is crucial to consider voice agents not as an isolated market but as an entire industry with opportunities at every level of the supply chain: from foundational models to development platforms, up to end-user applications. This means the ecosystem will have space for numerous players, each with their specific role.
Recent advances in foundational models have led to dramatic improvements in accuracy and latency. In some cases, companies are even deliberately slowing response times or introducing slight background noise to make interactions more “human” and less unnatural.
The ability of voice agents to handle multilingual conversations with strong accents offers an additional competitive advantage, enabling global companies to operate without the geographical constraints that characterize human labor.
These three transformations—the elimination of the text interface, the agent-first design, and the industrial rise of voice systems—are not mere predictions but visible signs of a structural change already underway. Those who understand and anticipate these shifts will have the opportunity to redefine entire sectors.