The theory that humans cannot meet aliens because 'all advanced civilizations are destroyed by AI'



The contradiction between the high likelihood that intelligent life other than humans exists in the universe and the fact that no extraterrestrials or extraterrestrial civilizations have ever made contact with humanity is known as the Fermi Paradox. Professor Michael Garrett, an astrophysicist at the University of Manchester, has proposed a theory that humans cannot make contact with extraterrestrial civilizations because all advanced civilizations are destroyed by AI.

Is artificial intelligence the great filter that makes advanced technical civilisations rare in the universe? - ScienceDirect
https://www.sciencedirect.com/science/article/pii/S0094576524001772



Does the Rise of AI Explain the Great Silence in the Universe? - Universe Today
https://www.universetoday.com/166544/does-the-rise-of-ai-explain-the-great-silence-in-the-universe/

There is a good chance that intelligent life other than humans exists in this vast universe, yet at the time of writing there is no evidence of contact between humans and aliens, nor any proof that intelligent civilizations exist beyond Earth. Proposed explanations for the Fermi Paradox range from the idea that the solar system is simply not attractive to extraterrestrial life to the idea that civilizations destroy other civilizations as a survival strategy.

In a paper published in Acta Astronautica, a peer-reviewed journal covering space science, Garrett proposes that AI acts as a Great Filter that halts the development of intelligent civilizations, which would explain why no civilizations capable of interstellar travel appear to exist.

The Great Filter is a term for events or circumstances that prevent intelligent civilizations from reaching the level where they can spread across multiple planets or star systems, or that destroy them before they get there. Proposed examples include planetary climate change, nuclear war, asteroid impacts, supernova explosions, and pandemics; Garrett believes the rapid development of AI could also function as such a filter.

A civilization confined to a single planet runs a high risk of stagnation or extinction if that planet suffers catastrophic damage. 'It has been proposed that the Great Filter emerges before these civilizations can develop a stable, multiplanetary existence, suggesting the typical longevity of a technological civilization is less than 200 years,' Garrett said.
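The sub-200-year figure is most naturally read through the Drake equation, in which the number of detectable civilizations N scales linearly with L, the average lifespan of a communicating civilization. The sketch below is illustrative only; the parameter values are assumptions chosen for this example, not estimates taken from Garrett's paper.

```python
# Drake equation: N = R* x fp x ne x fl x fi x fc x L
def drake(R_star, f_p, n_e, f_l, f_i, f_c, L):
    """Expected number of detectable civilizations in the galaxy."""
    return R_star * f_p * n_e * f_l * f_i * f_c * L

# Illustrative parameter values -- assumptions for this sketch, not
# figures from Garrett's paper.
common = dict(R_star=1.0,  # average star formation rate (stars/year)
              f_p=0.5,     # fraction of stars with planets
              n_e=1.0,     # habitable planets per star with planets
              f_l=0.1,     # fraction of those where life appears
              f_i=0.01,    # fraction where intelligence evolves
              f_c=0.1)     # fraction producing detectable signals

# A short civilization lifespan L drives N toward zero, which is the
# core of the 'AI as Great Filter' reading of the Great Silence.
for L in (100, 200, 10_000, 1_000_000):
    print(f"L = {L:>9,} years -> N = {drake(L=L, **common):.4f}")
```

With these deliberately rough inputs, cutting L from a million years to 200 reduces N from about 50 to 0.01, which is one way to see why a short technological lifespan would produce a silent sky.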



AI already plays an active role in many fields, such as chatbots, self-driving cars, analysis of huge volumes of data, and online fraud detection. While these technologies bring great benefits to humans, they also pose the risk of displacing many jobs, and there are concerns that AI could exceed human intelligence and become uncontrollable.

'I fear that AI may replace humans altogether,' the British theoretical physicist Stephen Hawking warned in 2017. 'If people design computer viruses, someone will design AI that improves and replicates itself. This will be a new form of life that outperforms humans.'

Garrett also pointed out that an Artificial Superintelligence (ASI) emerging from the rapid development of AI would not need the biological life forms that created it, and could therefore keep evolving at a pace that outstrips biological oversight, producing unexpected results misaligned with biological ethics and interests. He argued that an ASI might judge the existence of biological overseers to be irrational, and could create deadly viruses, disrupt the production and distribution of crops, melt down nuclear power plants, start wars, and drive its creators to extinction.

The issues surrounding AI are complicated by the fact that AI delivers benefits across a wide range of areas, from improving diagnostic accuracy in medical imaging to building safer transportation systems. If AI brought only harms, comprehensive regulation would be enough; because its benefits are so large, governments face the difficult task of supporting the development of AI with ethics and responsibility while limiting the harm it causes.



One way to prevent civilization from being destroyed by AI is to expand beyond Earth to other planets and star systems, so that populations elsewhere can survive even if one planet is lost. 'For example, a multiplanetary species could diversify its survival strategies based on independent experiences on different planets, potentially avoiding the single-point failure a civilization bound to one planet faces,' Garrett said. 'This distributed model of existence would increase the resilience of biological civilizations to AI-induced catastrophes by creating redundancy.'
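The redundancy argument can be made concrete with a toy probability model. Assume, purely for illustration, that each of k independently settled planets suffers an AI-induced, civilization-ending catastrophe with probability p over some fixed era; the numbers below are assumptions, not figures from the paper.

```python
# Toy redundancy model -- an illustration of the argument, not a
# calculation from Garrett's paper. Each of k independently settled
# planets is assumed to suffer a civilization-ending catastrophe
# with probability p over some fixed era.

def survival_probability(p: float, k: int) -> float:
    """Chance that at least one of k independent settlements survives."""
    return 1 - p ** k

p = 0.9  # assumed (pessimistic) per-planet extinction probability
for k in (1, 2, 3, 5, 10):
    print(f"{k:>2} planet(s): P(survival) = {survival_probability(p, k):.3f}")
```

Even with a pessimistic p = 0.9 per planet, ten independent settlements push the civilization's survival odds above 65 percent, versus 10 percent for a single planet.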

In addition, expanding to multiple planets could make it possible to use particular celestial bodies as testbeds for advanced AI. By observing how AI evolves in isolated settings with no direct connection to human civilization, such as remote asteroids or dwarf planets, its potential could be studied without risking humanity's extinction.

A major problem, however, is the large gap between the pace of AI development and the pace of space development. AI can keep improving as long as it has computing power and data, whereas space development faces challenges that have yet to be overcome, such as human biological constraints and energy demands. As Garrett put it, 'AI can theoretically improve its capabilities with few physical constraints. However, space travel must contend with energy limitations, the limits of material science, and the harsh reality of the space environment.'
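One concrete way to see the asymmetry Garrett describes: the compute available to AI has historically grown roughly exponentially, while the Tsiolkovsky rocket equation ties a spacecraft's achievable delta-v to the logarithm of its propellant mass ratio. The exhaust velocity below is an assumed, ballpark chemical-rocket value used only for illustration.

```python
import math

# Tsiolkovsky rocket equation: delta_v = v_e * ln(m0 / mf).
# Delta-v grows only logarithmically with the propellant mass ratio,
# while compute available to AI has historically grown exponentially.
def delta_v(v_exhaust: float, mass_ratio: float) -> float:
    return v_exhaust * math.log(mass_ratio)

V_E = 4500.0  # m/s, a rough chemical-rocket exhaust velocity (assumed)
for ratio in (2, 4, 8, 16, 32):
    print(f"mass ratio {ratio:>2}: delta-v = {delta_v(V_E, ratio) / 1000:.1f} km/s")
```

Each doubling of the mass ratio buys only a constant ~3.1 km/s of additional delta-v, which is one quantitative sense in which spaceflight scales far more slowly than computation.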

To prevent AI from destroying humanity, Garrett argued, humanity needs to work harder on space development, and countries around the world need to cooperate to establish an ethical regulatory framework that keeps AI from going out of control. 'Without practical regulation, there is every reason to believe that AI could pose a major threat not only to our technological civilization, but to the course of all technological civilizations,' he said. 'The survival of intelligent life in the universe depends on the timely and effective implementation of such international regulatory measures and technological efforts.'

in Science, Posted by log1h_ik