In 1995, the popular TV science programme Tomorrow’s World tried to predict how the world would look in 2025. It got some things right, such as VR headsets, smart speakers and robot surgery. It got others wrong, such as asteroid mining, implanted microchips for banking and cyberspace riots.
One of the show’s predictions was that by 2055 people and machines would be “cognitively connected”. Right idea, wrong timeframe.
A similar prediction showed up this winter in the thousands of articles forecasting the near future of the tech industry. Gartner forecast widespread adoption of “neurological enhancement”, using computers to supercharge human intelligence via a bidirectional brain-machine interface (BBMI). By 2030, it predicts, “30% of knowledge workers will depend on these enhancements to increase their output, optimise their work, and stay competitive as AI continues its rise in the workplace”.
Tools you can trust
You don’t need an LLM to tell you that AI dominated the annual round of crystal ball gazing. Gartner’s top 10 also included software agents that are expected to make 15% of all work decisions by 2028.
Gartner also predicts the rapid emergence of AI governance tools to address some of the issues around AI reliability and ethics. These would be responsible for ensuring that the actions of AI models were fair, transparent and accountable. Enterprises that use them would – Gartner theorises – enjoy 30% higher trust ratings and 25% higher regulatory compliance than their peers.
Andreessen Horowitz had a slightly different take on AI and governance. Instead of policing itself, AI could be applied to the labour-intensive and repetitive business of regulatory compliance in financial services, manufacturing and elsewhere.
Hardware is eating the world
The resurgence of interest in machine intelligence has put robots firmly back on the agenda. According to Andreessen Horowitz general partner Erin Price-Wright, rapid developments in robotics could drive a shift away from software and towards hardware for the next generation of engineers. “The robots are coming — someone will have to build, train, and service them,” she wrote.

Another hardware-related consequence of AI is the race to build GPU farms – specially equipped data centres built to process all that AI traffic. Andreessen Horowitz general partner Anjney Midha argues that any state that wants “a seat at the table” in AI will need to operate its own “hypercenter”, his term for a massive data centre with the energy supply and heat dissipation capability to run thousands of power-hungry GPUs. According to Midha, a hypercenter would need to run at 3-6 gigawatts, consuming roughly 40 times the power of today’s facilities.
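A quick back-of-the-envelope check puts that multiplier in context. The 3-6 GW range is Midha’s figure; the ~100 MW baseline for a large conventional data centre is our own illustrative assumption:

```python
# Sanity check of the "hypercenter" power claim.
# The 3-6 GW range is cited in the article; the 100 MW baseline for a
# large data centre today is an assumed, illustrative figure.
TODAY_FACILITY_MW = 100
HYPERCENTER_GW = (3, 6)

for gw in HYPERCENTER_GW:
    multiplier = (gw * 1000) / TODAY_FACILITY_MW  # convert GW to MW
    print(f"{gw} GW is {multiplier:.0f}x a {TODAY_FACILITY_MW} MW facility")
```

On that assumption the range works out to 30-60 times today’s facilities, consistent with the “roughly 40 times” figure.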
It seems certain that AI will put pressure on existing networking and compute infrastructure, but software design will ultimately determine how big a problem that becomes. Andreessen Horowitz partner Guido Appenzeller sees “AI coming to every application and every device. It’s no longer just running on large servers in the cloud but on small devices as well.” Locally run applications on phones, laptops and appliances could satisfy some of the demand for AI functionality without increasing network traffic, he argues.

Ransomware – this time it’s personal
AI is heavily implicated in the future of cybersecurity too. The technology is expected to make it easier than ever for hacker groups and nation state actors to engage in social engineering. The ability to generate authentic voices and deepfake video will increase the effectiveness of attacks from phishing to ransomware. In 2024, “ransomware as a service” became the established label for the hacker groups enabling the wider hacker community. Enterprises can expect more of the same in 2025 as this branch of criminal activity continues to adopt the trappings of a modern industry.
Some commentators, including Aryaka’s VP of Security Engineering and AI Strategy Aditya Sood, predict that AI will arm hackers for a new battlefront, where individuals rather than organisations are targeted. “Attackers could use AI to craft customised ransomware attacks as more personal data is harvested online through social media, leaked databases and compromised devices. What makes this prospect particularly alarming is the potential for AI automation, which would make these attacks both scalable and efficient.”

Two trends are likely to contribute to the rise of “personal ransomware”. One is that attackers will move on to new targets as large organisations become better at detecting and preventing ransomware attacks. The second trend is a product of the first. According to Matthias Held of crowdsourced security provider Bugcrowd: “Ransomware attacks are moving away from simple data encryption towards a more lucrative model focused on data exfiltration and extortion.” This shift, coupled with the scalability mentioned by Sood, could make AI a potent tool for digital blackmailers.

Future present
Futuristic technology always captures most of the attention, but many of the solutions to current and future problems are already here. Most cyber attacks succeed because infrastructure is insecure. Wherever there are networks there are millions of open doors.
We predict that in 2025 many enterprises will continue to ignore zero trust network architecture and fail to take sensible precautions to prevent lateral movement when networks are breached.
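To illustrate the principle behind zero trust, here is a toy sketch of deny-by-default access control: every connection is checked against an explicit allowlist, and nothing is trusted simply for being “inside” the network. The zone names and policy are hypothetical:

```python
# Toy illustration of zero-trust, deny-by-default access control.
# A request is permitted only if explicitly allowlisted; everything
# else, including "internal" east-west traffic, is denied, which is
# what limits lateral movement after a breach. Policy is hypothetical.
ALLOWED = {("workstations", "servers", 443)}  # (src zone, dst zone, port)

def permit(src_zone: str, dst_zone: str, port: int) -> bool:
    return (src_zone, dst_zone, port) in ALLOWED

print(permit("workstations", "servers", 443))  # True: explicitly allowed
print(permit("workstations", "servers", 22))   # False: lateral SSH denied
```

A compromised workstation in this model cannot reach arbitrary internal services, only the handful that policy explicitly grants.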
We also predict that network performance (always the poor cousin of security) will be a hot topic in 2025 and beyond. Providing secure connectivity that is also reliable and fast is a challenge for enterprises today. AI will have a multiplier effect on performance bottlenecks. To paraphrase Andreessen Horowitz’s most famous prediction (“software is eating the world”), 2025 could be the year AI eats the network.
By Mark Fox, CEO, Zonic Group.