AI runs on math. That math runs on hardware. And that hardware has a cost — in energy, in infrastructure, and in environmental impact that almost nobody in the industry wants to talk about.
Gina Rosenthal brings Tony Foster back for a second conversation, this time focused on what it actually takes to run an AI system at scale. Tony — Senior Principal Technical Marketing Engineer at Dell Technologies and adjunct professor at Kansas State University — walks through the architecture beneath modern AI: CPUs, GPUs, accelerators, and the evolution of chip design that made machine learning and deep learning possible in the first place.
The conversation traces a through line from the graphics processors built for video games to the neural network engines powering AI today — because it turns out that rendering a three-dimensional tree in a video game and training a large language model are, at the hardware level, versions of the same math problem.
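To make that "same math problem" claim concrete: both rendering geometry and running a neural network layer boil down to matrix multiplication, which is exactly the operation GPUs were built to do massively in parallel. Here is a minimal NumPy sketch (the specific matrices are illustrative, not from the episode):

```python
import numpy as np

# Graphics workload: rotate a 3D vertex (say, a point on that video-game tree)
# 90 degrees around the z-axis -- one matrix-vector multiply per vertex.
theta = np.pi / 2
rotation = np.array([
    [np.cos(theta), -np.sin(theta), 0.0],
    [np.sin(theta),  np.cos(theta), 0.0],
    [0.0,            0.0,           1.0],
])
vertex = np.array([1.0, 0.0, 0.0])
rotated = rotation @ vertex  # a GPU runs this for millions of vertices at once

# ML workload: a dense neural-network layer is the same primitive --
# a weight matrix multiplied by an input vector, one row per neuron.
weights = np.random.default_rng(0).normal(size=(4, 3))
activations = weights @ vertex  # same matmul, different interpretation

print(rotated)            # the vertex, rotated into place
print(activations.shape)  # one pre-activation value per neuron: (4,)
```

The hardware doesn't care whether the numbers describe a polygon or a weight: it just multiplies matrices, which is why chips designed for games turned out to be the engine of deep learning.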
But Tony and Gina don’t stop at the technical specs. They address the carbon footprint of the data centers running these systems, why newer hardware is not just faster but meaningfully more energy efficient, and what it means for organizations — and the planet — when AI infrastructure scales without accountability.
The intelligence is impressive. The infrastructure required to support it is a conversation we are overdue to have.