The question to ask about AI is not 'how to use AI,' but 'whether AI should be used in this situation at all.'

Most advice and educational materials about AI teach how to get better output faster, but treating AI as just a productivity tool misses the point, argues Illingworth in the following article for The Conversation.
Using AI responsibly means knowing when not to use it
https://theconversation.com/using-ai-responsibly-means-knowing-when-not-to-use-it-274671

When it comes to AI, people tend to focus on questions like 'How do we use AI?' and 'How can we get output faster?' However, Illingworth argues that the real questions are 'Should we use AI in the first place?' and 'What do we lose by using it?'
Illingworth points out that AI has hidden biases most users are unaware of. In 2025, researchers who analyzed a British newspaper archive found that the digitized Victorian newspapers represented less than 20% of the original print versions, were heavily skewed toward political topics, and contained few neutral opinions.
Anyone studying Victorian society through these digitized newspapers therefore risks reproducing the biases embedded in the archive rather than accurately reconstructing the period. The same applies to the datasets underpinning today's AI: their biases risk being reproduced in the AI's output.
'A newspaper article from 1870 is not a window into the past, but a curated representation shaped by editors, advertisers and owners,' Illingworth said. 'AI output works in a similar way: it synthesizes patterns from training data that reflect particular worldviews and commercial interests.'

In one example, because an image-generating AI's training data lacked images of Black doctors caring for white children, the system depicted the children receiving care as Black as well. Similar biases can arise in AI-generated articles and videos.
Philosophers Micah Lott and William Hasselberger have similarly argued that AI can never be a friend to humans. They define friendship as 'caring for others for their own benefit,' but AI is designed to serve its users and has no interests of its own. However devoted an AI may seem, that devotion reflects its design rather than any interest of its own, so the relationship between AI and humans is not friendship.
'When companies market AI as a "good partner," they offer pseudo-empathy without the human friction,' Illingworth said. 'The AI cannot reject you or pursue its own interests. The relationship remains one-sided, a commercial transaction disguised as connection.'

Illingworth runs Slow AI, a community that explores how to engage with AI effectively and ethically. While current AI development assumes people will move faster, think less, and accept AI output by default, Illingworth recommends resisting this trend by cultivating 'critical AI literacy.'
Some AI advocates have criticized opponents of AI by comparing them to the Luddites, working-class people in 19th-century Britain who destroyed textile machinery. Illingworth points out that the Luddites were not resisting progress itself; they were trying to protect their livelihoods from the social harms of automation, such as the loss of craftsmanship and the exploitation of workers. In his view, the Luddites objected not to technology but to its uncritical introduction, and critical AI literacy can help restore that discernment.
AI decision-making already affects fields such as employment, healthcare, education, and justice, but without a framework for critically evaluating the AI systems used in these fields, we end up entrusting important decisions to algorithms whose limitations no one understands.
'Ultimately, critical AI literacy isn't about mastering prompts or optimizing workflows,' Illingworth said. 'It's about knowing when to use AI and when not to use it at all.'