
Legal and policy experts are increasingly warning that autonomous AI agents are racing ahead of the frameworks meant to govern them, especially in India. As companies roll out agents across payments, banking, healthcare and supply chains, regulators are left without a dedicated legal regime for systems that can act autonomously and trigger other AI tools with little or no human oversight.
Existing laws covering contracts, liability, consumer protection and data governance are being pushed well beyond their original design. Particular unease surrounds agent-to-agent interactions and the question of who is responsible when automated systems fail. With oversight still grounded in high-level principles and voluntary guidelines, momentum is building for risk-based regulation and sandboxed experimentation.
That tension between capability, access and control shaped the AI agenda this week. Here are the key developments from across the industry:
OpenAI has rolled out a major update to Codex, pushing it beyond coding assistance and toward a broader AI work partner for the more than 3 million developers who use it each week. Codex can now operate a user's computer in the background with its own cursor, work across everyday apps, generate images, remember preferences, and take on longer-running tasks spanning days or weeks.
The update also brings deeper developer tooling, including PR reviews, multi-file and terminal support, SSH access to remote development machines, an in-app browser, and more than 90 new plugins spanning GitHub, Jira, CI tools and workplace apps. OpenAI says new safety layers, including sandboxing and experimental Guardian Approvals, are meant to balance greater autonomy with user control as Codex becomes more agentic.