Anthropomorphizing AI: The consequences of mistaking human-like AI for humans are already apparent




In our rush to understand and relate to AI, we have fallen into a trap: attributing human characteristics to powerful but fundamentally nonhuman systems. This anthropomorphizing of AI is no longer a harmless quirk of human nature; it is a dangerous tendency that can cloud our judgment. From business leaders who justify training practices by comparing AI learning to human education, to lawmakers who craft policies based on flawed human-AI analogies, this tendency to humanize AI is inappropriately shaping crucial decisions across industries and regulatory frameworks.

Viewing AI through a human lens has led businesses to overestimate AI capabilities or underestimate the need for human oversight, sometimes with expensive consequences. The stakes are especially high in copyright law, where anthropomorphic reasoning has produced problematic comparisons between human learning and AI training.

The language trap

Listen carefully to the way we talk about AI. We use terms like "learns," "thinks," "understands" and "creates." These words sound natural, but they are misleading. When we say that an AI model "learns," it does not gain understanding like a student. It performs complex statistical analysis on vast amounts of data, adjusting the weights and parameters of its neural networks according to mathematical principles. There is no comprehension, no spark of inspiration, no eureka moment. Just increasingly sophisticated pattern matching.
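
To make that concrete, here is a deliberately tiny sketch of what "learning" means mechanically: a single parameter nudged, step by step, to reduce prediction error. The task, values and variable names are illustrative inventions; real models adjust billions of parameters, but through the same kind of error-driven numerical update.

```python
# "Learning" as mathematical optimization: one parameter, one update rule.
# Real neural networks adjust billions of parameters, but the mechanism
# is the same kind of error-driven numerical adjustment. All names and
# values here are illustrative.

weight = 0.0          # the model's single adjustable parameter
learning_rate = 0.1   # illustrative step size

# Training data: the model should learn the mapping x -> 2x.
examples = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]

for epoch in range(50):
    for x, target in examples:
        prediction = weight * x
        error = prediction - target
        gradient = 2 * error * x            # d(error^2)/d(weight)
        weight -= learning_rate * gradient  # nudge weight to reduce error

print(round(weight, 4))  # approaches 2.0: statistics, not understanding
```

Nothing in that loop resembles insight; the number simply moves in whatever direction reduces the measured error, which is all "adjusting weights" amounts to.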

The linguistic sleight-of-hand is more than semantic. As noted in the article "Generative AI's Illusory Case for Fair Use": "The use of anthropomorphic terminology to describe the development of AI models and their functioning is misleading because it suggests that the model, once trained, operates independently of the content of the works it has been trained on." This confusion has real consequences, especially when it influences policy and legal decisions.

The cognitive disconnect

One of the most dangerous aspects of anthropomorphizing AI is that it obscures the fundamental differences between machine and human intelligence. While some AI systems excel at specific types of reasoning and analytical tasks, large language models (LLMs), which dominate today's AI discourse and are the focus here, operate through sophisticated pattern recognition.

These systems process huge amounts of data, identifying and learning statistical relationships between words, phrases, images and other inputs in order to predict what comes next in a sequence. When we say that they "learn," we are describing a mathematical optimization process that helps them make more accurate predictions based on their training data.

Consider this telling example from research by Berglund and colleagues: a model taught that Valentina Tereshkova was the first woman to travel to space can correctly answer "Who was Valentina Tereshkova?" but struggles with "Who was the first woman to travel to space?" This reveals the fundamental difference between pattern recognition and true reasoning: between predicting likely word sequences and understanding their meaning.
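
The directionality of that failure falls straight out of next-token statistics, and a toy model is enough to show it. The sketch below is a crude word-level bigram predictor, nothing like a real LLM in scale or architecture, but it illustrates why statistics learned in one direction do not automatically answer questions posed in the other.

```python
from collections import defaultdict

# Toy next-token predictor: record which word follows which.
# A real LLM is incomparably more sophisticated, but the statistics
# it learns are likewise tied to the direction of the training text.
training_text = "valentina tereshkova was the first woman to travel to space"
tokens = training_text.split()

follows = defaultdict(list)
for prev, nxt in zip(tokens, tokens[1:]):
    follows[prev].append(nxt)

def complete(prompt: str, steps: int = 8) -> str:
    words = prompt.split()
    for _ in range(steps):
        candidates = follows.get(words[-1])
        if not candidates:
            break  # no forward statistics from this word
        words.append(candidates[0])
    return " ".join(words)

# Forward query: the training text directly supports the continuation.
print(complete("valentina tereshkova was"))

# Reverse query: the model re-treads fragments of the forward phrasing
# but never emits "valentina", because that word never *follows* any
# word in the training text. The reverse fact simply is not stored.
print(complete("the first woman to travel to space was"))
```

The forward prompt completes the memorized sentence; the reverse prompt loops through the forward phrasing without ever producing the name, because the "fact" exists only as one-directional word statistics.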

The copyright conundrum

This anthropomorphic bias is particularly troubling in the ongoing debate over AI and copyright. Microsoft CEO Satya Nadella recently compared AI training to human learning, suggesting that if humans can learn from books without copyright implications, AI should be able to do the same. This comparison illustrates the danger of anthropomorphic thinking in discussions about ethical and responsible AI.

The analogy fails to capture what actually happens in AI training. When humans read books, we do not make copies of them; we understand and internalize the concepts. AI systems, by contrast, must make copies of works, often obtained without permission or payment, encode them into their architecture and maintain these encoded versions in order to function. The works do not disappear after "learning," as AI companies often claim; they remain embedded in the system's neural networks.

The business blind spot

Anthropomorphizing AI creates dangerous blind spots that go beyond simple operational inefficiencies. When executives and decision-makers think of AI in human terms as "creative" or "intelligent," it can lead to a cascade of risky assumptions and potential legal liabilities.

Overestimating AI capabilities

One critical area where anthropomorphizing creates risk is content generation and copyright compliance. Businesses may incorrectly assume that AI-generated content is free of copyright concerns because the AI "learned" the way a human does. This misunderstanding can lead companies to:

  • Deploy AI that inadvertently reproduces copyrighted content, exposing them to infringement claims.
  • Fail to implement appropriate content filtering and supervision mechanisms (a minimal sketch of such a gate follows this list).
  • Assume, incorrectly, that AI can reliably distinguish between copyrighted and public domain material.
  • Underestimate the need for human review of content generation processes.
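
To make the supervision point concrete, here is a minimal sketch of a pre-publication review gate. Everything in it is a hypothetical stand-in: the `known_passages` list, the similarity threshold and the crude overlap check are placeholders for whatever reference index and matching tooling a company actually uses. Real copyright screening is far harder than string similarity; the point is only that AI output gets routed past a human rather than presumed safe.

```python
from difflib import SequenceMatcher

# Hypothetical reference set of passages known to be copyrighted.
# In practice this would be a licensed index, not a hardcoded list.
known_passages = [
    "It was the best of times, it was the worst of times",
]

SIMILARITY_THRESHOLD = 0.8  # illustrative value, not a legal standard

def needs_human_review(generated_text: str) -> bool:
    """Flag AI output that closely matches known copyrighted text.

    A crude overlap check like this cannot prove or rule out
    infringement; it only routes risky output to a human reviewer.
    """
    return any(
        SequenceMatcher(None, generated_text.lower(), passage.lower()).ratio()
        >= SIMILARITY_THRESHOLD
        for passage in known_passages
    )

def publish(generated_text: str) -> None:
    if needs_human_review(generated_text):
        print("HELD: route to human review before publication.")
    else:
        print("OK to continue through normal editorial checks.")

publish("It was the best of times, it was the worst of times")  # HELD
publish("A wholly original sentence about quarterly results.")  # OK
```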

The cross-border blind spot

The anthropomorphic view of AI creates further dangers when we consider cross-border compliance. As Daniel Gervais and Catherine Zaller Rowland explain in "The Heart of the Matter: Copyright, AI Training, and LLMs," copyright law operates under strict territorial principles: each jurisdiction maintains its own rules about what constitutes infringement and which exemptions apply.

This territorial nature of copyright law creates a complex web of potential liability. Companies may mistakenly believe that their AI systems can freely "learn" from copyrighted material across jurisdictions, failing to realize that training activities that are legal in one place may constitute infringement in another. The EU has recognized this risk in its AI Act: Recital 106 requires that any general-purpose AI model offered in the EU comply with EU copyright law regarding training data, no matter where the training took place.

This matters because anthropomorphizing AI leads companies to misunderstand or underestimate their legal obligations across borders. The fiction that AI "learns" like humans obscures the fact that AI training involves complex copying and storage operations that trigger distinct legal obligations in different jurisdictions. This fundamental misunderstanding of how AI actually functions, combined with the territorial nature of copyright law, creates serious risks for businesses operating globally.

The human cost

Anthropomorphizing AI also carries a human cost. People increasingly form emotional connections with AI chatbots, treating them as confidants or friends. This is especially dangerous for vulnerable individuals who rely on AI for emotional support or share deeply personal information: the AI's responses are sophisticated pattern matching over training data, not genuine empathy or emotional connection.

The same vulnerability shows up in professional settings. As AI tools are integrated into daily work, employees may develop inappropriate levels of trust in them, treating them like actual colleagues rather than tools. They may share confidential work information too freely, or hesitate to report errors out of misplaced loyalty. Such scenarios remain isolated for now, but they show how anthropomorphizing AI in the workplace can cloud judgment and create unhealthy dependence on systems that, despite their sophisticated responses, are incapable of genuine understanding or care.

How do we move forward? First, we need to be more precise in the language we use about AI. Instead of saying that an AI "understands" or "learns," we can say that it "processes" data or "generates" outputs based on patterns in its training data. This is not pedantry; it clarifies what these systems actually do.

Second, we should evaluate AI systems based on what they actually are, not what we imagine them to be. This means acknowledging both their impressive capabilities and their fundamental limitations: AI can process enormous amounts of data and identify patterns humans might miss, but it cannot reason, create or understand the way humans do.

Finally, we must develop policies and frameworks that address AI's actual characteristics rather than imagined human-like qualities. This is especially important in copyright law, where anthropomorphic reasoning can lead to faulty analogies and inappropriate legal conclusions.

The way forward

As AI systems become more sophisticated at mimicking human outputs, the temptation to anthropomorphize them will only grow. This bias shapes everything from how we evaluate AI's capabilities to how we assess its risks, and, as we have seen, it creates significant practical challenges in copyright law and business compliance. Meeting those challenges starts with understanding the fundamental nature of AI systems and how they actually process and store information.

Understanding AI for what it is, a sophisticated information-processing system rather than a human-like learner, is crucial to every aspect of AI deployment and governance. By moving beyond anthropomorphic thinking, we can better address the challenges AI systems pose, from ethical considerations and safety risks to cross-border copyright compliance and training data governance. That clearer understanding will help businesses make more informed decisions while supporting sounder policy development and public discussion around AI.

If we engage with AI as it actually is, we will be better equipped to navigate both its profound societal implications and its practical challenges for the global economy.

Roanie is a licensing and legal adviser at CCC.
