FinOps for AI
5 sections

Foundational

What Are We Even Paying For?

The Fundamentals of AI Cost Models

90 min · 5 sections · 10 quiz questions
Exam topics: AI Cost Models, Token Billing Mechanics, Deployment Infrastructure
NovaSpark
It's Monday morning, day three at NovaSpark. You've barely found the good coffee machine when Priya — VP Engineering, the person who hired you — drops a laptop on your desk and pulls up an AWS Billing console. The number at the top is $847,000. That's not the annual budget. That's last month.

"AI spend," she says. "Up 340% from two months ago. Finance is asking questions. The board meeting is Friday." She slides a printed spreadsheet across the desk — four teams, four cost centers, all contributing to a single consolidated bill that looks like it was generated by three different companies using three different currencies.

"I need to understand what we're paying for," Priya says. "Not the total. The mechanics. Where does this money actually go?"

You look at the spreadsheet. Three lines catch your eye immediately:

Line 1: "OpenAI API — gpt-4o — $214,440"
Line 2: "AWS Bedrock — Llama 3.1 70B — $89,200"
Line 3: "EC2 p4d.24xlarge (4× reserved) — $156,800"

Same goal — run AI workloads — three completely different billing structures. This is where you start.
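The three spreadsheet lines differ in *how* the meter runs, not just in the vendor. A minimal sketch of the two underlying billing mechanics: per-token API pricing (OpenAI and Bedrock on-demand) versus reserved-capacity pricing (EC2), where you pay for hours whether or not the GPUs are busy. All rates and usage volumes below are hypothetical placeholders for illustration, not real price-list values.

```python
# Two billing mechanics behind the three spreadsheet lines.
# Every number here is a made-up placeholder, not a quoted price.

def api_token_cost(input_tokens, output_tokens, in_rate_per_m, out_rate_per_m):
    """Per-token billing: pay per million tokens, with separate
    rates for input (prompt) and output (completion) tokens."""
    return (input_tokens / 1_000_000) * in_rate_per_m + \
           (output_tokens / 1_000_000) * out_rate_per_m

def reserved_instance_cost(hourly_rate, instances, hours):
    """Reserved-capacity billing: pay for committed instance-hours
    regardless of utilization."""
    return hourly_rate * instances * hours

# Hypothetical month of usage under each model
openai_bill = api_token_cost(
    input_tokens=9_000_000_000, output_tokens=2_000_000_000,
    in_rate_per_m=2.50, out_rate_per_m=10.00)   # assumed rates
bedrock_bill = api_token_cost(
    input_tokens=20_000_000_000, output_tokens=5_000_000_000,
    in_rate_per_m=0.72, out_rate_per_m=0.72)    # assumed rates
ec2_bill = reserved_instance_cost(
    hourly_rate=54.0, instances=4, hours=730)   # assumed rate, ~1 month

for name, cost in [("OpenAI API", openai_bill),
                   ("AWS Bedrock", bedrock_bill),
                   ("EC2 reserved", ec2_bill)]:
    print(f"{name:>12}: ${cost:,.0f}/month")
```

Note the design difference: the token-billed lines scale with usage (idle weekends cost nothing), while the reserved line is a fixed commitment (idle weekends cost the same as peak load). That asymmetry is why the consolidated bill reads like three different companies.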
🛡️ Complete the Knowledge Check to earn the Token Tracker (150 pts)