Akamai and Anthropic: Why Edge AI Matters Now

Edge AI Has Arrived

If ever there was a story of Internet longevity, it is Akamai.

Thirty years after helping define the Content Delivery Network market, the company is arguably as relevant today as it has ever been.

Last week, Akamai reportedly signed its largest-ever deal: a $1.8bn, seven-year agreement with Anthropic, the AI company behind Claude.

At first glance, many will see this as another large AI infrastructure announcement. In reality, it may signal something much bigger about where AI infrastructure is heading next.

Why This Deal Matters

The deal reportedly involves Nvidia RTX Pro 6000 hardware and infrastructure designed to support AI and cyber security workloads closer to the end user.

That principle is not new for Akamai.

For the last three decades, the company’s entire strategy has centred around proximity:

  • getting content closer to users
  • reducing latency
  • distributing workloads globally
  • improving resilience at the edge

Originally this was about web performance and CDN delivery. Later it evolved into DDoS mitigation, API protection, and broader cyber security services.

Now the same model is becoming highly relevant for AI.

From an infrastructure perspective, very few organisations in the world have the same level of global edge presence as Akamai.

AI Is Shifting From Training To Inference

One of the most important shifts happening in AI infrastructure is the move from model training to inference at scale.

Training large AI models still happens inside massive, centralised GPU clusters.

Inference is different.

Inference is the real-time serving of AI responses to users and applications, and increasingly that benefits from being distributed geographically and positioned closer to the user.

That creates several advantages:

  • lower latency
  • improved responsiveness
  • reduced bottlenecks
  • better resilience
  • potentially lower inference costs

This is where Edge AI becomes strategically important.
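The latency benefit of proximity can be illustrated with a back-of-envelope sketch. The figures below are illustrative assumptions, not measurements from Akamai or Anthropic: light in optical fibre travels at roughly two-thirds the speed of light in a vacuum, about 200 km per millisecond, which puts a hard physical floor under round-trip time before any inference work even begins.

```python
# Back-of-envelope propagation delay: why serving inference closer
# to the user lowers latency. Illustrative physics only -- ignores
# routing, queuing, TLS handshakes, and model processing time.

FIBRE_SPEED_KM_PER_MS = 200.0  # ~2/3 of c in optical fibre (assumption)

def round_trip_ms(distance_km: float) -> float:
    """Minimum round-trip propagation delay in milliseconds."""
    return 2 * distance_km / FIBRE_SPEED_KM_PER_MS

# Hypothetical distances: a centralised region 5,000 km away
# versus an edge node 100 km away.
print(round_trip_ms(5000))  # 50.0 ms of unavoidable network delay
print(round_trip_ms(100))   # 1.0 ms
```

On these assumed distances, the edge placement removes tens of milliseconds per round trip, which compounds quickly for chat-style AI applications that make many sequential requests.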

Why Anthropic May Be Diversifying

Another important angle is supplier diversification.

Today, much of the AI market remains heavily concentrated around:

  • AWS
  • Microsoft Azure
  • Google Cloud

Adding Akamai (and its Linode cloud platform) into the mix potentially gives Anthropic:

  • geographic flexibility
  • additional AI compute capacity
  • resilience against supply chain constraints
  • negotiating leverage
  • reduced dependency on a small number of hyperscalers

This is particularly relevant while GPU demand and AI infrastructure pressure remain extremely high globally.

The broader market insight here is important: AI providers increasingly need infrastructure diversity, not just raw compute power.

Why This Matters Beyond AI

From an ITogether perspective, this is about more than one commercial agreement.

It reflects a wider architectural shift:

  • applications moving closer to users
  • distributed workloads becoming more important
  • security and performance increasingly converging
  • edge infrastructure becoming strategically valuable again

This also reinforces why network architecture matters in AI conversations.

As AI applications become more real-time:

  • latency matters more
  • resilience matters more
  • geographic distribution matters more

That changes how organisations may eventually think about cloud architecture, WAN strategy, security inspection, edge security controls, and AI application delivery.

A Strategic Moment For Akamai

For Akamai itself, the deal is equally significant.

For years, many organisations primarily associated Akamai with:

  • CDN services
  • DDoS protection
  • web acceleration

This deal strengthens its positioning as something broader: an AI infrastructure platform with one of the world’s largest distributed edge footprints.

Investor reaction reflected that shift, with reports suggesting the market responded strongly following the announcement.

Reuters also recently reported a separate large-scale Google Cloud commitment potentially worth up to $200 billion over five years, highlighting just how aggressively AI infrastructure investment is accelerating globally.

The Bigger Picture

The most interesting part of this story is not the contract size; it is what the deal suggests about the future direction of AI infrastructure.

For years, centralisation dominated cloud strategy; AI may partially reverse that trend.

As inference workloads scale globally, the ability to process, secure, and deliver AI services closer to users could become one of the most important infrastructure advantages in the market. That makes Edge AI, distributed infrastructure, and low-latency networking far more than technical buzzwords. They may become foundational to how the next generation of AI services operates.

👉 Contact us to explore how Edge AI, distributed infrastructure, and modern network architecture could influence your organisation’s future strategy. We’d be happy to help.

📞 UK +44 (0) 113 341 0123

📞 NZ +64 (0)9 802 2444

📧 hello@itogether.com
