-
Air AI Banned by FTC: What Call Centers Using AI Outbound Calling Should Do Next
On March 24, 2026, the Federal Trade Commission (FTC) announced a settlement with Air AI Technologies and its owners, concluding a lawsuit filed in August 2025. Under the terms of the proposed order, Air AI and its operators are banned from selling or marketing any business opportunity and from making false or unsubstantiated claims while…
-
The Silent Revenue Kill: “Spam Likely” Is a Pipeline Problem, Not a Dialer Problem
You spent money on leads. Your team is dialing. The system shows thousands of call attempts going out. But conversations aren’t happening. Qualified prospects aren’t moving forward. Revenue is flat. Most sales leaders in this situation blame the script. Or the leads. Or the reps. Almost none of them check the one thing that may…
-
Inside our approach to the Model Spec
Learn how OpenAI’s Model Spec serves as a public framework for model behavior, balancing safety, user freedom, and accountability as AI systems advance.
-
Introducing the OpenAI Safety Bug Bounty program
OpenAI launches a Safety Bug Bounty program to identify AI abuse and safety risks, including agentic vulnerabilities, prompt injection, and data exfiltration.
-
Helping developers build safer AI experiences for teens
OpenAI releases prompt-based teen safety policies for developers using gpt-oss-safeguard, helping moderate age-specific risks in AI systems.
-
Update on the OpenAI Foundation
The OpenAI Foundation announces plans to invest at least $1 billion toward curing diseases, expanding economic opportunity, strengthening AI resilience, and supporting community programs.
-
Powering product discovery in ChatGPT
ChatGPT introduces richer, visually immersive shopping powered by the Agentic Commerce Protocol, enabling product discovery, side-by-side comparisons, and merchant integration.
-
Creating with Sora Safely
To address the novel safety challenges posed by a state-of-the-art video model as well as a new social creation platform, we’ve built Sora 2 and the Sora app with safety at the foundation. Our approach is anchored in concrete protections.
-
How we monitor internal coding agents for misalignment
How OpenAI uses chain-of-thought monitoring to study misalignment in internal coding agents, analyzing real-world deployments to detect risks and strengthen AI safety safeguards.
-
OpenAI to acquire Astral
Accelerates Codex growth to power the next generation of Python developer tools