Ethicist warns that relying too much on AI could reshape our identities without us even realizing it



In recent years, the rise of AI has expanded what technology can do, but it has also raised moral, ethical, and philosophical questions. In 'AI Morality,' a collection of essays on the moral dilemmas AI poses, Muriel Leuenberger, who studies the ethics of AI at the University of Zurich in Switzerland, argues that 'if we rely too much on AI, there is a danger that our own identity will be reshaped without our consent.'

AI 'can stunt the skills necessary for independent self-creation': Relying on algorithms could reshape your entire identity without you realizing | Live Science
https://www.livescience.com/technology/artificial-intelligence/ai-can-stunt-the-skills-necessary-for-independent-self-creation-relying-on-algorithms-could-reshape-your-entire-identity-without-you-realizing



In modern society, all kinds of services and apps collect information about who your friends are, who you talked to, where you went, what music, movies, and games you like, what news you read, what you bought with your credit card, and more. This information is already fed into AI-driven recommendations, and large companies such as Google and Facebook can predict a person's political opinions, consumer preferences, personality, employment status, whether they have children, risk of mental illness, and so on.

As the use of AI and the digitalization of our lives continue to advance, we are moving closer to a future in which AI knows us better than we know ourselves. 'The personal user profiles that AI systems generate may be able to describe a person's values, interests, personality traits, prejudices, and mental illnesses better than the user can,' Leuenberger said. 'Technology can already provide personal information that even the person themselves did not know.'

If AI knows us better than we know ourselves, it seems reasonable to rely on it to help us choose the partner or friends we'll have, the next job we'll take, the party we'll attend, the house we'll buy, and so on. But Leuenberger argues that relying too heavily on AI raises two problems: whether we can trust it, and whether we can still create our own identity.



◆How can we trust AI?
For example, if friend A introduces you to potential lover B, you will probably consider whether you can trust friend A before meeting B. If friend A is drunk, alcohol may be clouding their judgment, and if friend A's own romantic experiences have been unsuccessful, you may want to be cautious. How much friend A knows about B, and why they made the introduction, are also important factors to consider.

Taking these factors into account is difficult even when the other party is a human, and it becomes even harder when the recommendation comes from an AI. It is hard to know how much an AI really knows about you and whether the information it holds is trustworthy, and many AI systems have been found to be biased, so it is wise not to trust AI blindly.

In addition, while you can ask a human 'Why did you think that?', most AI recommendation systems have no chat function to ask, making it difficult to evaluate the reliability, capabilities, and intentions of the AI and its developer. The algorithms behind AI decisions are generally the property of the company and cannot be accessed by users, and even if they could be, they would be hard to understand without specialized knowledge. Furthermore, AI behavior has a 'black box' quality that even developers cannot fully explain, making it nearly impossible to interpret its intent, says Leuenberger.



◆AI will take away our ability to create our own identity
Even if a fully trustworthy AI were to emerge, Leuenberger argues, there would remain concerns about people's ability to create their own identity. AI that tells people what to do is built on the idea that identity is information that both the user and the AI can access: that we can determine who someone is and what they should do from statistical analysis, personal data, psychology, social institutions, human relationships, biology, economics, and other facts.

However, this overlooks the fact that people are not passive bearers of identity; their identities are actively and dynamically created and chosen by themselves. The philosopher Jean-Paul Sartre, who advocated the existentialist view that 'existence precedes essence,' argued that people are free to imagine their own identity of their own volition and to act toward it.

'We are constantly creating ourselves, and this must be free and independent,' says Leuenberger. 'Within the framework of certain facts (where you were born, how tall you are, what you said to your friend yesterday), you are fundamentally free, and morally required, to construct your own identity and define what is meaningful to you. Most importantly, the goal is not to discover the one and only correct way to be, but to choose your own identity and take responsibility for it.'

While AI can provide a quantified view of a person and a set of guidelines for action, it is still up to each individual to decide how to act on that information and who to become. By continuing to blindly trust and follow AI, we give up the freedom and responsibility to create our own identity.

Leuenberger said that constantly relying on AI to decide what music to listen to, what job to take, which politicians to vote for, and so on could stunt the skills needed to build an independent identity. 'Making good choices in life and building an identity that's meaningful and makes you happy is a great accomplishment. By subcontracting this power to an AI, you slowly lose responsibility for your life and who you are,' she said.



Following AI recommendation systems may certainly make our lives easier, but it also carries the risk of ceding control over our identities to large tech companies and organizations.

Choosing something for yourself can end in failure, but encountering things that don't fit you, or being thrown into an environment you're not comfortable with, can also be an opportunity for growth. 'Moving to a city that doesn't suit you disrupts your rhythm of life, which can push you to look for new hobbies,' says Leuenberger. 'Always relying on AI recommendation systems can cause your identity to become fixed.'

The fixation of identity caused by reliance on AI is further strengthened when AI profiling becomes a 'self-fulfilling prophecy': as your identity is reshaped according to AI predictions, the products it recommends increasingly match your preferences, perpetuating the identity it has shaped.
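To make that feedback loop concrete, here is a minimal toy simulation. It is not from Leuenberger's essay: the model, variable names, and numbers (drift rate, catalog size, and so on) are illustrative assumptions. A similarity-based recommender serves items matching the current profile, consuming those items pulls the preferences toward them, and the "identity" gradually stops moving.

```python
import numpy as np

# Toy model of the self-reinforcing recommendation loop described above.
# All parameters here are illustrative assumptions, not from the source essay.

rng = np.random.default_rng(42)

n_topics = 5      # dimensions of the preference / identity vector
drift = 0.15      # how strongly a consumed item pulls the preferences
steps = 40

catalog = rng.random((200, n_topics))   # items described by topic weights
preferences = rng.random(n_topics)      # the user's current preferences

def recommend(profile, items):
    """Pick the catalog item with the highest cosine similarity to the profile."""
    sims = items @ profile / (np.linalg.norm(items, axis=1) * np.linalg.norm(profile))
    return np.argmax(sims)

seen = []
for step in range(steps):
    idx = recommend(preferences, catalog)
    seen.append(idx)
    # Consuming the recommended item nudges the preferences toward it:
    # the profile shapes the person, which then confirms the profile.
    updated = (1 - drift) * preferences + drift * catalog[idx]
    change = np.linalg.norm(updated - preferences)
    preferences = updated
    if step % 10 == 0:
        print(f"step {step:2d}: recommended item {idx:3d}, preference change {change:.4f}")

print(f"distinct items recommended over {steps} steps: {len(set(seen))}")
# The per-step change shrinks toward zero and the recommendations collapse
# onto a handful of items: the profile and the feed lock each other in place.
```

Under these toy assumptions the loop settles quickly; how strongly a real system locks users in would depend on things like exploration noise, catalog turnover, and how much consumption actually shifts a person's preferences.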

To prevent AI from reshaping our identities, Leuenberger encourages us to set aside recommendation systems and choose our own entertainment and activities. This requires prior research and may be uncomfortable at times, but it also gives us the opportunity to grow and develop our own identity.

in Note, Posted by log1h_ik