Daniel Rausch, Amazon’s vice president of Alexa and Echo, is in the middle of a major overhaul. More than a decade after the launch of Amazon Alexa, he has been tasked with building a new version of the assistant powered by large language models. In my interview with him, he described the new assistant, dubbed Alexa+, as “a complete redesign of the architecture.”
So how did his team go about Amazon’s biggest-ever revamp of its assistant? Of course, they used AI to build AI.
“The rate at which we use AI tooling throughout the build process is pretty astounding,” Rausch says. Amazon leaned on AI at every stage of creating the new Alexa, including, yes, generating code.
Alexa’s team has also incorporated generative AI into its testing process. During reinforcement learning, engineers used a large language model as a judge, having it pick the better of two Alexa+ outputs.
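The LLM-as-a-judge setup described above boils down to a pairwise comparison: two candidate answers are shown to a judge model, whose verdict becomes the reward signal for training. Here is a minimal sketch of that loop; the `judge` function is a stand-in heuristic (a keyword-overlap score), not Amazon's actual method, and in a real pipeline it would be a call to a large language model. All function names and the scoring rule are hypothetical.

```python
# Sketch of LLM-as-a-judge pairwise comparison for reinforcement learning.
# The judge is stubbed out with a simple heuristic; in practice it would be
# a prompt to a large language model asking which answer is better.

def judge(prompt: str, answer_a: str, answer_b: str) -> str:
    """Stub judge: prefers the answer sharing more words with the prompt.
    A real system would ask an LLM to compare the two answers."""
    keywords = set(prompt.lower().split())
    score_a = sum(word in keywords for word in answer_a.lower().split())
    score_b = sum(word in keywords for word in answer_b.lower().split())
    return "A" if score_a >= score_b else "B"

def preference_reward(prompt: str, answer_a: str, answer_b: str):
    """Turn the judge's verdict into (reward_a, reward_b) for training."""
    winner = judge(prompt, answer_a, answer_b)
    return (1.0, 0.0) if winner == "A" else (0.0, 1.0)

prompt = "find concert tickets in Seattle this weekend"
answer_a = "Here are concert tickets available in Seattle this weekend"
answer_b = "I can help with groceries"
print(preference_reward(prompt, answer_a, answer_b))  # prints (1.0, 0.0)
```

The appeal of this design is that no human needs to label every comparison: the judge model scales the feedback loop, which is part of why Rausch's team could iterate quickly.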
Rausch says AI tooling gives people the leverage to move faster and do better work. Amazon’s internal embrace of generative AI is part of a larger wave of disruption for software engineers, as tools like Anysphere’s Cursor change both how the job is done and the workload expected.
If these AI-focused workflows prove hyper-efficient, the definition of an engineer could change radically. In a memo to employees, Amazon CEO Andy Jassy wrote that “we will need fewer workers doing some of the current jobs, and more workers doing other types of work,” adding that while the exact impact of AI on the corporate workforce is hard to predict, he expects that workforce to shrink over the next few years as the company gains efficiency from using AI.
For the moment, Rausch is focused on rolling out the generative AI version to more Amazon users. “We didn’t want customers to be left behind in any way,” he says. “And that means you have to support hundreds of millions of different devices.”
Alexa+ now chats with users in a more conversational way. It’s more personalized, remembering your preferences and completing online tasks you assign it, like searching for concert tickets or purchasing groceries.
Amazon launched Alexa+ in February at a company-wide event, and in March rolled out an early access program to a select group of public users, though without all the features announced. The company now claims that more than a million people are using the updated voice assistant. That’s still only a small fraction of the hundreds of millions of Alexa users who will eventually have access to the AI tool. Alexa+ could roll out to a wider audience later this summer.
Amazon is facing competition from many directions as it develops a more dynamic assistant. OpenAI launched its Advanced Voice Mode in 2024, and it proved popular among users who enjoyed talking with the AI. At its developer conference last year, Apple announced a revamp of its native voice assistant, Siri, with many contextual and personalized features similar to those Amazon is working on for Alexa+. Apple has not yet launched the rebuilt Siri, even in early access; the new voice assistant is now expected sometime next year.
Amazon declined to give WIRED hands-on access to Alexa+ to test it out, and the new assistant is not yet available on my Amazon account. As it becomes more widespread, WIRED will test Alexa+, much as we did when OpenAI launched its Advanced Voice Mode last year, and provide readers with our own perspective.

