This is the nightmare scenario for any team that built their whole workflow around a cloud API. No warning, no clear reason, no real support path; just a Google form and 60 people sitting on their hands.
The uncomfortable truth is that “terms of service” at this scale is just “we can pull the rug whenever.” Anthropic isn’t unique here either. OpenAI, Google, all of them have the same opaque enforcement problem. It’s a big part of why I’ve been building tools that run on local inference by default. Not because cloud is bad, but because your users shouldn’t be one vague policy complaint away from a complete outage.
Local gives you continuity even when the upstream disappears.
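The "local by default, cloud only as a fallback" idea can be sketched as a simple provider chain. This is a minimal illustration, not any real library's API: the provider functions and exception names are all hypothetical stand-ins.

```python
# Hypothetical sketch: each provider is just a callable that may raise.
# All names here are illustrative, not a real SDK.
from typing import Callable, List


class AllProvidersFailed(Exception):
    """Raised only when every provider in the chain has failed."""


def complete(prompt: str, providers: List[Callable[[str], str]]) -> str:
    """Try each provider in order; fall through on any failure."""
    errors = []
    for provider in providers:
        try:
            return provider(prompt)
        except Exception as exc:  # a real system would narrow this
            errors.append(exc)
    raise AllProvidersFailed(errors)


# Stand-ins: a local model that keeps working, and a cloud API
# whose account has just been suspended.
def local_model(prompt: str) -> str:
    return f"local: {prompt}"


def suspended_cloud_api(prompt: str) -> str:
    raise PermissionError("account suspended")


# Local first: the cloud ban becomes a non-event instead of an outage.
print(complete("hello", [local_model, suspended_cloud_api]))
```

The ordering is the whole point: with the local provider first, a revoked cloud key never even surfaces to users; with cloud first, it degrades gracefully instead of hard-failing.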