Will we ever be able to trust robots?

It might seem that the world is on the verge of a humanoid robot heyday. New breakthroughs in artificial intelligence promise capable, general-purpose robots of a kind previously seen only in science fiction: machines that can assemble cars, care for patients, or clean our homes without being given specific instructions.

This idea has garnered a great deal of attention, capital, and optimism. Figure raised $675 million for its humanoid robot in 2024, less than two years after the company was founded. At a Tesla event in October, the Optimus robots were more impressive than the self-driving cab that was supposed to be the highlight. Elon Musk believes these robots can help create “a future without poverty,” working in our homes, workplaces, schools, and hospitals, as well as in war zones and at borders, in roles from therapist to carpenter.

Yet recent progress is arguably more about style than substance. Advances in AI have made robots easier for humans to train, but they still do not allow robots to “think” about what to do next or to act on those decisions the way some viral videos suggest. In many of these demos (including Tesla’s), a robot pouring a drink or wiping down a counter is not acting independently, even if it appears to be: it is being teleoperated, the roboticist’s term for remote control by a human. And the futuristic look of these humanoids, borrowed from dystopian Hollywood sci-fi tropes such as screens for faces, piercing eyes, and towering metallic forms, does much of the work of suggesting that the robots are capable.

Leila Takayama, a robotics specialist and vice president of design and human-robot interaction at the warehouse robotics company Robust AI, says, “I’m concerned that we’re reaching peak hype. There’s an arms race, or a war of humanoids, between the big tech firms to show they can do more or better.” As a result, any roboticist who isn’t working on a humanoid has to explain to investors why not. “We need to talk about these things now,” Takayama told me. “We didn’t need to a year ago.”

Shariq Hashme, a former employee of both OpenAI and Scale AI, entered his robotics company, Prosper, into this arms race in 2021. The company is creating a humanoid robot assistant called Alfie to perform domestic tasks in homes, hospitals, and hotels. Prosper hopes to sell Alfies for between $10,000 and $15,000 per unit.

When designing Alfie, Hashme identified trustworthiness as the most important consideration. One way to earn people’s trust, he believes, is to give Alfie a character that is humanlike, but not too human.

That character is about more than Alfie’s looks. Hashme and his colleagues are imagining how the robot will move and signal what he will do next, as well as the desires and flaws that will shape his approach to his tasks. They are also writing an internal code of ethical conduct to govern which instructions he will and will not accept from his owners.

This heavy emphasis on Alfie’s trustworthiness might seem premature. Prosper, a startup with modest capital compared with giants like Tesla and Figure, is still months or even years away from shipping its product. But the need to tackle trust early and head-on reflects the messy position humanoids are in. Despite all the research and investment, few people would feel comfortable if a robot like this walked into their living room. We would wonder what data it was recording about us and our surroundings, fear that it could one day take our jobs, or simply be put off by the way it moved. Rather than elegant and useful, humanoids often come across as cumbersome and creepy. Until they overcome that lack of trust, they will not live up to the hype.

On the road to winning our confidence, one question looms over Alfie more than any other: How much can he do on his own, and how much will he still rely on humans?

New AI methods have made it easier to train robots on demonstration data, usually a combination of images, video, and motion data generated by humans who perform tasks such as washing dishes while wearing sensors that capture their movements. That data can coach a robot in much the same way that a large corpus of text helps a large language model generate sentences. But the approach still requires enormous amounts of data, and plenty of humans to correct errors.
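In rough terms, this kind of learning from demonstrations is often implemented as behavior cloning: a neural network is trained to map what the robot senses to the action a human demonstrator took in the same situation. The sketch below is only illustrative, with made-up data shapes and names rather than anything Prosper has described.

```python
# A minimal behavior-cloning sketch (illustrative; not Prosper's actual system).
# The policy maps a camera image plus joint-sensor readings to an action and is
# trained to imitate actions recorded from human demonstrations.
import torch
import torch.nn as nn

class DemoPolicy(nn.Module):
    def __init__(self, num_joints=7):
        super().__init__()
        # Small convolutional encoder for RGB camera frames.
        self.vision = nn.Sequential(
            nn.Conv2d(3, 16, 5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, 5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),  # pool to a fixed-size feature vector
            nn.Flatten(),             # -> (batch, 32)
        )
        # Combine image features with proprioceptive (joint) readings.
        self.head = nn.Sequential(
            nn.Linear(32 + num_joints, 128), nn.ReLU(),
            nn.Linear(128, num_joints),  # predicted joint velocities
        )

    def forward(self, image, joints):
        feats = self.vision(image)
        return self.head(torch.cat([feats, joints], dim=-1))

policy = DemoPolicy()
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-4)
loss_fn = nn.MSELoss()

# Fake demonstration batch: in practice this comes from humans wearing sensors.
images = torch.randn(8, 3, 64, 64)   # camera frames
joints = torch.randn(8, 7)           # joint readings at each frame
actions = torch.randn(8, 7)          # the demonstrator's recorded actions

optimizer.zero_grad()
loss = loss_fn(policy(images, joints), actions)  # imitate the human's actions
loss.backward()
optimizer.step()
```

The point of the sketch is the data flow, not the architecture: the model never plans; it only reproduces, frame by frame, what people did in the demonstrations, which is why so much data and so much human correction are still needed.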

Hashme said he expects Alfie to be able to perform only about 20% of tasks on his own in the first release. The rest will be handled by a team of Prosper “remote assistants,” at least some of them based in the Philippines, who will control Alfie remotely. When I asked, among other things, whether it was viable for a robotics business to rely so heavily on manual labor, Hashme cited the success of Scale AI, a company that processes data for AI applications, employs a large workforce in the Philippines, and is often criticized for its labor practices. Hashme managed that workforce for about a year before founding Prosper. His departure from Scale AI was precipitated by a violation of trust, one for which he served time in federal prison.

Whether Alfie succeeds or fails will reveal a lot about society’s willingness to accept humanoid robots into our private spaces. Can we accept a new and asymmetrical labor arrangement in which workers in low-wage countries use robotic interfaces to perform physical tasks in our homes? Will we trust them to protect our private data and images? Will the robots even be useful?


To address some of these trust concerns, Hashme brought in Buck Lewis. Two decades before Lewis worked with robots, and before he was asked to design a humanoid people would trust instead of fear, he faced a rat.

In 2001, Lewis was a renowned animator and one of Pixar’s top minds. His specialty was creating characters with deep, universal appeal, a major concern for studios funding high-budget, global projects. That niche led him to create characters for DreamWorks and Disney movies, including Cars. When Jan Pinkava, the man behind Ratatouille, told Lewis his pitch for the film, about a rat who wanted to be a chef, it seemed impossible. Humans are so frightened of rats that the word itself has become a synonym for someone who cannot be trusted. Yet Lewis turned a vilified rodent into a chef people loved. “It is a deeply ingrained fear, because rats are horrifying,” Lewis told me. “For this to be successful, we had to create a character who rewired people’s perceptions.” To do that, Lewis spent a lot of time in his own head, imagining scenes such as a group of rats hosting a playful pop-up dinner in Paris. Thus Remy was born: a Parisian rat who rises to the top of the culinary world in Ratatouille, and who was so adorable that demand for pet rats soared worldwide after the film’s release in 2007.

Now, after more than 20 years in the film industry, Lewis is responsible for crafting Alfie’s character. Alfie is his attempt to change the perception of humanoid robots from futuristic, dangerous, and untrustworthy to helpful and trustworthy.

Prosper’s approach reflects a robotics principle articulated by Rodney Brooks, a cofounder of iRobot, the company that created the Roomba: a robot’s visual appearance makes a promise about what it can do and how intelligent it is. It must deliver on, or slightly exceed, that promise, or it won’t be accepted.

By this principle, any humanoid robot implicitly promises to behave like a person, a bar some firms consider so high that they reject the form altogether. Humanoid skeptics question whether a helpful robot should resemble a person at all when it can accomplish practical tasks without the anthropomorphism. Guy Hoffman, an associate professor at Cornell University’s engineering school and a roboticist who specializes in human-robot interaction, asks: “Why do we love the idea of creating a replica of ourselves?”

Early prototypes of Prosper’s robot butler could perform household chores like cleaning the kitchen table, washing dishes, and disposing of trash.

DAVID VINTINER

The main argument for giving robots human characteristics is a practical one: our homes and offices were designed by and for humans, so a robot shaped like us will navigate them better. Hoffman thinks there is another reason. “Through the use of a humanoid form, we are selling an idea about this robot: that it can do the things we can.”

In designing Alfie’s appearance, Prosper borrowed some features of humanoid design and rejected others. Alfie, for instance, has wheels instead of feet, because bipedal robots tend to be less stable in homes, but he still has arms and a face. The robot will be built on a vertical column resembling a torso; his exact height and weight have not been revealed. He will have two emergency stop buttons. Nothing about Alfie’s design, Lewis says, will try to hide the fact that he is a robot. “The antithesis [of trustworthiness] is designing a robot intended to mimic a human… and its success is measured by how well it has fooled you,” he told me. “Like, ‘Wow, it was only five minutes ago that I didn’t know it was a robot.’ That’s dishonest.”

But many other humanoid efforts are headed in a direction where deception appears increasingly appealing. In 2023, ultrarealistic robots appeared in a stunt to promote a Disney film; after a video of them went viral, Disney revealed that they were in fact people in suits. Nine months later, researchers at the University of Tokyo unveiled a way of attaching engineered skin made from human cells to a robot’s face so that it more closely resembles a real human’s.

Through this kind of humanoid, we are selling the story that [this robot] in some way is equivalent to us or the things that we can do.

Guy Hoffman

Lewis has considered much more than Alfie’s looks. He and Prosper see Alfie as a robot ambassador representing a future in which robots embody the best of humanity. He is neither young nor old, but has the wisdom and experience of middle age. His primary purpose in life is to serve people according to their needs. And like any compelling character, Alfie has flaws people can relate to: he wishes he were faster, and he is a bit obsessed with completing the tasks asked of him. His core tenets are to respect boundaries, to be discreet and nonjudgmental, and to earn trust. “He is a nonhuman entity, but he does have a kind of sentience,” Lewis says. “I’m trying to avoid looking at it as directly comparable to human consciousness.”

I’ve been referring to Alfie as “he,” at the risk of over-anthropomorphizing what is currently a robot in development, because Lewis pictures him as male. When I asked why, he said it was probably a relic of the archetypal butlers on the television shows he watched growing up, like Batman. And in a conversation with Hashme, I learned that there is a real butler who in some ways serves as an inspiration for Alfie.

Heslop is a hospitality trainer with decades of experience. For seven years, he was the sole person in the United States Department of Defense qualified to train the household managers who would run the homes of three- and four-star generals. Heslop is now the household manager for a wealthy Middle Eastern family (he declined to give more details) and has been hired by Prosper to help shape Alfie’s approach to service in the home. Shortly after our conversation, Heslop elaborated on what excellent service means, quoting Steven M. Ferry’s book Butlers & Household Managers: 21st Century Professionals: “That’s what a good butler does: create beautiful moments that put people at ease and increase their enjoyment.” He spoke with conviction about the impact that great service can have on the world, and about how protocol and etiquette can bring down the egos of even the most powerful dignitaries. He also cited a quote attributed to Mahatma Gandhi: “The best way to find yourself is to lose yourself in the service of others.”

Despite his lack of experience with robotics and household robots, Heslop believes that Prosper’s priorities are the right ones to achieve this. Privacy, discretion, and a meticulous eye for detail, he says, are critical to the company’s overall objective. “And, more importantly, Alfie.”

It’s one thing to draw Alfie on paper; it’s another to build him. In the real world, the first version of Alfie will rely on remote assistants, mostly working overseas, to handle about 80% of his household tasks. These assistants will steer Alfie’s movements with interfaces similar to video-game controls, relying on data from his cameras and sensors to guide them as he washes dishes or clears the table.
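To make that concrete, a teleoperation loop at its simplest reads a gamepad and streams motion commands to the robot. The sketch below is an assumption-laden illustration, not Prosper’s interface: the library, stick mapping, and command format are all stand-ins.

```python
# Illustrative teleoperation loop (assumed details; not Prosper's interface):
# read a gamepad's sticks and turn them into drive commands for a wheeled robot.
import time
import pygame

pygame.init()
pygame.joystick.init()
pad = pygame.joystick.Joystick(0)   # assumes one gamepad is plugged in

MAX_SPEED = 0.5   # meters per second, a cautious cap for a home robot
MAX_TURN = 1.0    # radians per second

def read_command():
    """Convert stick positions into a forward speed and a turn rate."""
    pygame.event.pump()             # refresh joystick state
    forward = -pad.get_axis(1)      # pushing the left stick up reads negative
    turn = pad.get_axis(2)          # axis numbering varies by controller
    return forward * MAX_SPEED, turn * MAX_TURN

def send_to_robot(speed, turn):
    # Placeholder: a real system would stream this over the network to the
    # robot's motor controller while video from its cameras streams back.
    print(f"drive: speed={speed:+.2f} m/s, turn={turn:+.2f} rad/s")

for _ in range(200):                # about ten seconds of control
    speed, turn = read_command()
    send_to_robot(speed, turn)
    time.sleep(0.05)                # roughly 20 commands per second
```

In practice the hard part is not the mapping but the round trip: video must travel from the robot to the assistant and commands back again quickly enough for a task like dishwashing to feel controllable.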

Hashme said that efforts are being made to conceal or anonymize data that reveals personal information while the robot is teleoperated, including removing sensitive objects or people’s faces from the feed and giving users the option to delete any footage. Alfie, Hashme wrote, would “often simply look away” from any potentially private activity.
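One common way to do this sort of redaction, offered here purely as a hedged example rather than a description of Prosper’s pipeline, is to detect faces in each camera frame and blur them before the frame ever reaches a remote assistant.

```python
# Illustrative privacy filter (an assumed approach, not Prosper's implementation):
# blur any faces detected in a camera frame before it reaches a remote assistant.
import cv2

# Haar-cascade face detector that ships with OpenCV.
detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

def anonymize(frame):
    """Return a copy of the frame with detected faces blurred out."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    out = frame.copy()
    for (x, y, w, h) in faces:
        region = out[y:y + h, x:x + w]
        out[y:y + h, x:x + w] = cv2.GaussianBlur(region, (51, 51), 0)
    return out

# Example: redact one frame from a camera before it would be streamed onward.
camera = cv2.VideoCapture(0)
ok, frame = camera.read()
if ok:
    cv2.imwrite("frame_for_remote_assistant.jpg", anonymize(frame))
camera.release()
```

Detection is never perfect, which is why fallbacks of the kind Prosper describes, such as deletable footage and a robot that simply looks away, still matter.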

AI has a horrifying track record when it comes to workers in low-wage countries performing the hidden labor required to build cutting-edge models. Workers in Kenya were paid less than $2 per hour to manually purge toxic data for OpenAI, including content describing child sexual abuse and torture. And according to a Washington Post investigation, Hashme’s Scale AI operation in the Philippines was criticized by rights groups in 2023 for failing to adhere to basic labor standards and to pay employees properly and on time. OpenAI has said that such work “needs to be done humanely and voluntarily” and that the company established “ethical standards and wellness standards” for its data annotators. Scale AI responded to questions about criticism of its Philippine operation by writing, “Over the last year alone, we have paid out hundreds of millions in earnings to our contributors, giving them flexible work options and economic opportunities,” and that “98% of support tickets regarding payment have been successfully resolved.” I asked Hashme what lessons he had learned from the allegations against Scale AI and other companies that outsource sensitive data work. He claims he wasn’t aware of the accusations. “We did make mistakes, which we quickly rectified and generally took quite seriously,” he wrote in an email.

Shariq Hashme, a former employee of OpenAI and Scale AI, entered his robotics company Prosper into the humanoid arms race in 2021.

DAVID VINTINER

“A lot of companies that do this kind of stuff end up doing it in a way that is kind of shitty to the people who are employed,” Hashme said. Such companies, he said, often outsource HR to untrustworthy foreign partners or lose workers’ confidence through bad incentive programs. According to court documents, Scale discovered in May 2019 that someone had been repeatedly transferring unauthorized payments of $140 to multiple PayPal accounts; over the course of five months, the company lost approximately $56,000. Scale contacted the FBI, and the investigation revealed that Hashme was behind the withdrawals. In October of that year, Hashme pleaded guilty to a single count of wire fraud. Alexandr Wang, Scale AI’s founder and now-billionaire CEO, wrote a letter in support of Hashme before his sentencing, as did 13 other current or former Scale employees. “I believe Shariq has genuine remorse for his crime and I have no reason to believe that he will do this again,” Wang wrote, adding that the company would not have wanted Hashme to be prosecuted had it known he was the culprit. Hashme was fired, losing his stock options and the green card application Scale had been sponsoring. According to Wang’s letter, Scale offered him $10,000 in severance when he left, which he declined. Hashme returned the stolen money in 2019 and was sentenced in February 2020 to three months in federal prison, which he served. Wang is now one of the primary investors in Prosper Robotics, along with Ben Mann (a cofounder of Anthropic), Simon Last (a cofounder of Notion), and Debo Olaosebikan (cofounder and CEO of Kepler Computing).

“I had a major lapse of judgment when I was younger. I was going through a personal crisis and stole from my boss,” Hashme wrote in response to questions about his crime. “The consequences and realization of what I had done came as a shock and led to lots of soul-searching.” At Prosper, he wrote, “we’re taking trustworthiness to be our highest aspiration.”

Still, the company’s labor model raises its own questions of trust. It could mean that highly localized physical tasks we assume are immune to offshoring, such as cleaning hotel rooms or caring for hospital patients, might one day be performed by workers abroad. That seems to cut against the very idea of a trustworthy robot, since the machine’s usefulness would hinge on a faceless employee in another country who is likely to be paid paltry wages.

Hashme spoke of using a portion of Prosper’s profits to make direct payments to people whose jobs are affected or replaced by Alfies, though he did not have specifics about how this would work. He is also still pondering who, or what, Prosper’s customers should trust when they let its robot into their homes. “We don’t think you should have to put as much trust in our company or the employees,” he says. “We would rather you put your trust in the device. The device is the robot. And the robot is making certain the company doesn’t get caught doing something they aren’t supposed to.”
