I consider myself a skeptic when it comes to generative artificial intelligence (AI), and I’m not alone.
The Internet is awash with debates about AI, from pro-AI fans hyping how it has transformed their lives to people who refuse to use it for very valid reasons, such as data privacy and environmental impact.
Although I tend to agree with the skeptics more, I find it fascinating and surprising how many people have told me (both online and offline) that they use AI regularly. My job involves trying out new technologies, and AI is one of them. More often than not, it doesn’t work for me, which makes me a skeptic. According to Seth Juarez, VP of Product for Microsoft’s AI Platform, skepticism could be the key to a successful AI implementation.
“I am an AI skeptic,” Juarez told me when I sat down with him for a chat at Build 2025, Microsoft’s developer conference. “And that’s why I can actually make them do what I want to do.”
Juarez explained that it comes down to understanding how large language models (LLMs) work; once people understand this, they can learn how to use them correctly. How do these things work, then? Juarez described LLMs as “a machine that cranks language.”
Seth Juarez is the VP of Product for Microsoft’s AI Platform.
Juarez explained that LLMs are simply a way to break down human language into words and vectors, which then get “fed into this gigantic math machine.” This machine then produces probabilities for what the next word or sentence will be.
“Because it works this way, I stripped it of all the magic,” Juarez said. “And because I’ve stripped it of the magic, I know that it is a probabilistic procedure. I need to make certain that the prompt I give it will maximize its ability to return the correct thing… Because I approach this with skepticism, I know how to fine-tune the prompts so that it returns exactly what I want each time.”
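To make that description concrete, here is a minimal, hypothetical sketch in Python. It does not reflect any real model; the tiny vocabulary, the stand-in embedding, and the scoring function are assumptions meant only to show the shape of the process Juarez describes: words become vectors, a big math function turns them into scores, and the scores become a probability distribution over what comes next.

import math
import random

vocabulary = ["the", "cat", "sat", "on", "mat"]

def embed(word):
    # Stand-in for a learned embedding: a fixed pseudo-random vector per word.
    random.seed(word)
    return [random.uniform(-1, 1) for _ in range(4)]

def score_next(context_words):
    # Stand-in for the "gigantic math machine": average the context vectors,
    # then score each vocabulary word against that average.
    context = [sum(dims) / len(context_words)
               for dims in zip(*(embed(w) for w in context_words))]
    return [sum(c * e for c, e in zip(context, embed(w))) for w in vocabulary]

def softmax(scores):
    # Turn raw scores into probabilities that sum to 1.
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

# The output is not an answer; it is a probability for every candidate next word.
for word, p in zip(vocabulary, softmax(score_next(["the", "cat", "sat", "on"]))):
    print(f"{word}: {p:.2f}")

The point of the toy is the last line: the machine never “knows” the answer, it only ranks candidates, which is why Juarez calls it a probabilistic procedure.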
Narrowing your focus is the key to getting AI to work
There’s a disconnect here between what people believe AI can do and what it can actually do. As I mentioned above, I found that LLMs didn’t work for me, and I told Juarez this. He said that people tend to try AI in a “wide, open way,” but the most effective approach is to narrow it down.
“You have to tune the prompt. You have to get the tools. And you have to have a model. This is a process for engineers, not consumers, and that’s what we’ve done: we’ve basically released an engineering problem to consumers. They say, ‘Well, it doesn’t work,’ and others say, ‘Well, you’re holding the device wrong,’ and to some degree, both are correct.”
For people like me who haven’t experienced good results with AI, Juarez recommends avoiding general AI tools until they’re willing to put in the effort to get better outcomes.
“I would tell consumers to find the AIs that can help them in their daily lives and start with those,” Juarez said. “General ones are also good, but only when the legwork is done for each individual piece. It’s like relying on software that will do everything. No one would buy that.”
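As a rough illustration of that prompt-tools-model framing, here is a short, hypothetical Python sketch. It does not call any real service; the model name, the tool definition, and the prompt text are invented placeholders meant only to show how a narrow, engineered request differs from a wide-open one.

# Hypothetical sketch only: the three pieces Juarez lists, written out as data.
# None of these names refer to a real API; they are illustrative placeholders.

# 1. A model.
model = "example-llm-v1"

# 2. A prompt, tuned and narrowed rather than wide open.
wide_open_prompt = "Help me with my budget."
tuned_prompt = (
    "You are a spreadsheet assistant. Using the headers A=Category and "
    "B=Amount, return only a single formula that sums column B where "
    "column A equals 'Groceries'."
)

# 3. The tools the model is allowed to use, described explicitly.
tools = [
    {
        "name": "insert_formula",  # invented tool name
        "description": "Insert a formula into the active spreadsheet cell.",
        "parameters": {"formula": "string"},
    }
]

# The engineering work is assembling and iterating on this whole request,
# not just typing the wide-open question.
request = {"model": model, "prompt": tuned_prompt, "tools": tools}
print("Wide open:", wide_open_prompt)
print("Engineered:", request)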
Seth Juarez and Kedasha Kirr on stage at Build.
I think Juarez is on the right track, to some extent. My experience asking general AI tools like Microsoft Copilot and Google Gemini to perform tasks for me has not been good. After reflecting, I realized that some narrower use cases worked better.
I used Lex.page, a web-based AI-powered writing program, and was surprised by how helpful it turned out to be. I didn’t use Lex to generate content; I enjoy writing and couldn’t imagine asking a machine to do it. Instead, Lex offers AI prompts you can use to receive feedback on human-written content. These prompts were genuinely helpful, and I was able to use them to improve my writing. I’ve also had success with Gemini in Google Sheets, which helped me figure out a formula for a budget-tracking tool I was building. Gemini created a formula specific to my project and inserted it directly into the spreadsheet.
Overhyped in the industry
I also have some concerns, the biggest being the environmental impact. It also bothers me that these systems are often positioned by their makers as being able to do whatever users want, yet require so much effort to do even something remotely helpful. I don’t mind putting in a little effort, but the problem is that companies don’t make it clear that this is required for a successful outcome. Juarez gave the impression that he was aware of the disconnect between what AI companies claim it can do and what people are actually able to do with it.
“I think the industry has overhyped it,” Juarez said, adding that this has led to skepticism. “I’m under-hyping it in a way that adds value, and I think that’s what we need to do.”
Juarez seemed genuinely excited by some of the developments in the AI field and what’s coming. Microsoft’s Build event was dominated by AI ‘agents.’ As Juarez explained, agents are a way to map human queries onto computer execution. The company demonstrated a lot of agents interacting with other agents, which has a huge amount of potential.
Juarez is also excited about what engineers and developers will do with the AI and agent tools Microsoft provides. He believes that developers with “a healthy dose of skepticism” will enter the space and “unleash calculated creativity.”
“I am amazed every day by the things that I’ve seen… like, oh, you can make an LLM do that,” Juarez said. “I think the core understanding for myself is that everything that has to do with language can be converted into some sort of execution. If it can do that, then what can’t you do with the tools we aim to provide and a healthy dose of skepticism?”