“This new name will reflect its focus on serious AI risks with security implications, such as how the technology can be used to develop chemical and biological weapons, how it can be used to carry out cyber-attacks, and enable crimes such as fraud and child sexual abuse,” the government said. The new name, in other words, reflects a change in regulatory ambition. The Brookings Institution has defined AI safety as “research, strategies, and policies aimed at ensuring these systems are reliable, aligned with human values, and not causing serious harm” – a notion that has evidently seen better days.
With Meta’s dissolution of its Responsible AI team in late 2023, Apple’s and Meta’s refusal last year to sign the EU AI Pact, and the Trump administration ripping up Biden-era AI safety rules, there appears to be less appetite for preventive regulation – the sort of thing the US Food and Drug Administration attempts with the food supply – and more interest in proscriptive regulation: enjoy your racist, biased AI. Here, the UK government has championed unfettered speech in a manner that’s not evident in the reported stance it took on encryption.
To put it more bluntly, the UK is determined not to regulate AI out of the country, lest it forfeit the economic benefits of AI investment – whatever the labor consequences as AI augments or replaces jobs.
… helping us unleash AI and grow our economy…
Peter Kyle, the Secretary of State for Science, Innovation, and Technology, stated as much in a press release: “The changes I’m announcing today represent the logical next step in how we approach responsible AI development – helping us to unleash AI and grow the economy as part of our Plan for Change.” This plan is the Labour government’s blueprint of priorities.
Anthropic is a key partner in this plan, having distinguished itself from OpenAI, the leader among commercial AI firms. Built by ex-OpenAI staff and others, it identifies itself as “a safety-first company,” though whether that matters much anymore remains to be seen. Anthropic and the UK’s Department for Science, Innovation and Technology have signed a Memorandum of Understanding to develop AI tools that will be integrated into UK government services for citizens. Anthropic CEO and co-founder Dario Amodei made a statement to mark the deal.
Other governments have hedged their chatbot bets: New York City’s MyCity Chatbot disclaimer dialog, for example, comes with a you’re-to-blame-if-you-use-this checkbox, “I agree to the MyCity Chatbot’s beta limitations.” Problem solved.
Anthropic seems more confident about its technology, citing several government agencies that have already embraced its Claude family of LLMs. The San Francisco startup notes that the Washington, DC Department of Health partnered with Accenture to build a Claude-based bilingual chatbot, making the department’s services more accessible and providing health information on demand. The European Parliament uses Claude to search and analyze documents – without, presumably, the regrets evident among those who use AI for legal support.
In England, Swindon Borough Council offers a Claude-based tool called “Simply Readable,” hosted on Amazon Bedrock, that makes documents more accessible to people with disabilities by reformatting them with larger fonts, increased spacing, and additional images. The council claims significant financial savings as a result: Simply Readable converts a five-to-ten-page document for just 7-10 pence, freeing funds for other social services.
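Neither Swindon nor Anthropic has published the tool’s internals, but the core of such a converter can be sketched in a few lines against Bedrock’s Converse API. Everything below – the model choice, the prompt, and the function name – is an illustrative assumption, not the council’s actual code:

```python
# Hypothetical sketch of an "easy read" document converter on Amazon Bedrock.
# Model ID, prompt wording, and function names are assumptions, not Swindon's code.
import boto3

bedrock = boto3.client("bedrock-runtime", region_name="eu-west-2")  # London region

EASY_READ_PROMPT = (
    "Rewrite the following document in an easy-read style: short sentences, "
    "plain words, one idea per paragraph, and suggest where supporting images "
    "could go. Preserve all factual content."
)

def make_readable(document_text: str) -> str:
    """Ask a Claude model on Bedrock for an easy-read version of a document."""
    response = bedrock.converse(
        modelId="anthropic.claude-3-haiku-20240307-v1:0",  # assumed model choice
        system=[{"text": EASY_READ_PROMPT}],
        messages=[{"role": "user", "content": [{"text": document_text}]}],
        inferenceConfig={"maxTokens": 4096, "temperature": 0.2},
    )
    return response["output"]["message"]["content"][0]["text"]

if __name__ == "__main__":
    print(make_readable("Council tax is a local tax set by councils to pay for local services..."))
```

The heavy lifting sits in the system prompt; the plumbing is a single API call, which is why the per-document cost comes down to the chosen model’s token prices.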
Earlier this month, the UK’s Local Government Association, or LGA, announced that the tool had delivered a 749,900 percent return on investment.
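Taken at face value and run through the standard ROI formula – the LGA’s exact accounting isn’t spelled out, so treat this as back-of-the-envelope arithmetic – that figure implies every penny spent returned about £75 in value:

```python
# Back-of-the-envelope check on the LGA's headline number, using the
# standard definition: ROI% = (return - cost) / cost * 100.
roi_percent = 749_900
multiple = roi_percent / 100 + 1      # return as a multiple of cost
print(multiple)                       # 7500.0: each 1p spent returns 7,500p (£75)

# At the quoted 7-10 pence per converted document, that implies roughly
# £525-£750 of claimed value per document.
for cost_pence in (7, 10):
    print(f"{cost_pence}p -> £{cost_pence * multiple / 100:,.0f}")
```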
There are no details on whether those AI savings came at a cost in jobs – and, by extension, in Jobseeker’s Allowance claims. Anthropic, in time, may have an idea. The UK government’s deal gives it access to the AI firm’s recently announced Economic Index, which uses anonymized Claude conversations to estimate AI’s impact on the labor market. ®