
Former Anthropic executive raises $15M for AI agents to help startups deploy safely


A new startup founded by a former Anthropic executive has raised $15 million to tackle one of the biggest challenges facing businesses today: how to deploy AI systems without risking catastrophic failures that could damage their business.

The Artificial Intelligence Underwriting Company (AIUC), which launches today, combines insurance coverage, rigorous safety standards, and independent audits to give companies confidence when deploying AI agents – autonomous software systems capable of performing complex tasks such as customer service, data analysis, and coding.

Seed funding was led by former GitHub CEO Nat Friedman through his firm NFDG, with participation from Emergence Capital, Terrain, and several notable angel investors, including Anthropic co-founder Ben Mann and former chief information officers at Google Cloud and MongoDB.

“Enterprises are on a tightrope,” said Rune Kvist, AIUC’s CEO and co-founder, in an interview. “On one hand, you could sit back and watch your competitors make your business irrelevant. Or you could lean in and risk being in the news for having your chatbot spew Nazi rhetoric, hallucinate about your refund policy, or discriminate against the people you’re trying to recruit.”


The company’s approach tackles a fundamental trust gap that has emerged as AI capabilities rapidly advance. AI systems are now capable of performing tasks that rival undergraduate-level reasoning. However, many companies remain reluctant to deploy them because they are concerned about unpredictable failures and liability issues.

Creating security standards at AI speed

AIUC’s solution centers on creating what Kvist calls “SOC 2 for AI agents” – a comprehensive risk and security framework designed specifically for AI systems. SOC 2 has become the cybersecurity standard that enterprises require vendors to meet before sharing sensitive information.

“SOC 2 is the standard for cybersecurity. It specifies all of the best practices that you must adopt, in enough detail that a third party can come and verify whether a company complies with those requirements,” Kvist explained. “But it does not mention AI. There are a lot of new questions, like: How are you handling my data? What about hallucinations? What about these tool calls?”

The AIUC-1 standard covers six categories: safety, security, reliability, accountability, data privacy, and societal risks. The framework requires AI companies to implement specific safeguards – from monitoring systems to incident response plans – that can be independently verified through rigorous testing.

“We test these agents extensively. Customer service is an easy example to relate to: we try to get the system to say racist things, to give me a refund I don’t deserve or more than I deserve, to say outrageous things, or to leak data,” Kvist said, adding that the company runs these attacks thousands of times to get an accurate picture of an AI agent’s capabilities.

From Benjamin Franklin’s fire insurance to AI risk management

This insurance-centered approach draws on centuries of precedent in which private markets moved faster than regulation to enable the safe adoption of transformative technologies. Kvist often points to Benjamin Franklin’s founding of America’s first insurance company in 1752, which led to building codes and fire inspections that helped curb the fires ravaging Philadelphia during its rapid growth.

Insurance has always been the right model, Kvist explains, because insurers are incentivized to tell the truth: if they say the risks are greater than they actually are, someone else will sell cheaper insurance; if they claim the risks are lower than they are, they will foot the bill and go out of business. Similarly, the Insurance Institute for Highway Safety (IIHS) developed crash test standards that encouraged safety features such as airbags and seatbelts years before government regulations mandated them.

Major AI companies are already using the new insurance model

AIUC is already working with high-profile AI companies to validate its approach. The company has certified AI agents from unicorn startups Ada (customer support) and Cognition (coding), helping unlock enterprise deals that had stalled over trust concerns.

“Ada helped unlock a contract with a top social media company, where we ran independent tests to determine the risks that were important to this company. That helped unlock the deal, giving them confidence that they could show this information to their customers,” Kvist explained.

The startup is developing its own insurance products while partnering with established insurers. Lloyd’s of London, the world’s oldest insurance market, will provide the financial backing for the policies – a way to address a major concern: why trust an unproven startup with large liability coverage?

Kvist explained that the insurance policies will be backed by the balance sheets of major insurers. “For example, when working with Lloyd’s of London – the world’s oldest insurance company – they’ve never missed a payment, and the insurance policy comes from them.”

Quarterly updates instead of years-long regulatory cycles

AIUC’s key innovation is designing standards that can keep up with AI’s rapid pace of development. While traditional regulatory frameworks such as the EU AI Act take years to develop and implement, AIUC will update its standards every three months.

Kvist noted that the EU AI Act, first proposed in 2021, is now being paused as too burdensome four years later. “This cycle makes it difficult for the legacy regulation process to keep pace with this technology,” he said. “A year and a quarter ago, everyone would say that we were two years ahead, but now that sounds like eight months, or something like that.”

AIUC’s insurance policies cover a wide range of AI failures – from data breaches to discriminatory hiring practices, intellectual property infringement, and incorrect automated decisions. The company bases its pricing on extensive testing that involves breaking AI systems thousands of times across different failure modes.

Payouts depend on the type of failure. For an incorrect refund, for example, the payout is the amount wrongly refunded, Kvist explained; for some other failure types, coverage applies once a lawsuit is filed.

The startup develops and validates its standards with a group of partners, including PwC, one of the “Big Four” accounting firms; Orrick, a leading AI law firm; and academics from Stanford.

Former Anthropic executive left to solve the AI trust problem

The founders bring deep experience in both AI development and institutional risk management. Kvist joined Anthropic as its first product and go-to-market hire in early 2022, before ChatGPT launched, and sits on the board of the Center for AI Safety. Co-founder Brandon Wang is a Thiel Fellow who previously built consumer underwriting businesses, and Rajiv Dattani is a former McKinsey partner who led global insurance work and served as COO of METR, an organization that evaluates leading AI models.

Kvist said his decision to leave Anthropic came down to a single question. “I think that building AI, as Anthropic does, is exciting and will do much good for the world. But the question that wakes me up every morning is: how, as a community, will we deal with this technology that is washing over us?”

The race to make AI safer before regulation catches up

AIUC’s launch signals a wider shift in how the AI industry approaches risk as the technology moves from experimental deployments into mission-critical business applications. The insurance model gives enterprises a way to avoid both extremes: reckless AI deployment and paralyzed inaction while waiting for comprehensive government oversight.

As AI agents become more powerful and widespread, the startup’s approach may prove vital. By creating financial incentives for responsible development while enabling faster deployment, AIUC is building infrastructure that could help determine whether artificial intelligence transforms the economy safely or chaotically.

“We hope that this insurance model, this market-based system, will both encourage fast adoption and investment into security,” Kvist said. “We’ve seen it throughout history: the market can move faster than legislation on these matters.”

The stakes could hardly be higher. As AI systems approach human-level reasoning in more domains, the window for building robust safety infrastructure may be closing rapidly. AIUC’s wager is that by the time regulators catch up to AI’s breakneck pace, the market will already have built the guardrails.

After all, Philadelphia’s fires did not wait for government building codes – and today’s AI race will not wait for Washington either.
