Aksayskiy Vladimir

AI and Humanity - Their Opinions


Eric Schmidt, former CEO of Google, fears AGI and compares it to nuclear weapons.

I was interested to see the AI's reaction to Eric Schmidt's concerns.

Me: Pretend you are an AGI and give your opinion on the problems and solutions outlined in Eric Schmidt's book, available at the link:

https://drive.google.com/file/d/1JVPc3ObMP1L2a53T5LA1xxKXM6DAwEiC/view

GPT-4: The text is a strategic analysis of the potential impact of superintelligence on global security. The authors propose the concept of Mutual Assured AI Malfunction (MAIM), modeled on the nuclear doctrine of mutually assured destruction (MAD), and consider three key areas of strategy: deterrence, non-proliferation, and competitiveness.

Here is my detailed opinion:

Strengths of the text

1. Historical parallels with nuclear strategy
The authors skillfully draw analogies between the development of AI and the nuclear arms race, which makes the argument accessible and convincing. They emphasize that, as in the case of nuclear weapons, the balance of power can be upset, and the consequences can be catastrophic.

2. The concept of MAIM as a natural barrier
The idea that any unilateral drive for AI dominance will be met with sabotage from competitors sounds plausible. This fits with the current geopolitical reality, where cyberattacks and sanctions are used as tools of international policy.

3. Diversity of threats
The authors consider not only traditional state threats but also bioterrorism, cyberattacks, and the evolution of AI systems themselves beyond human control. In this sense, they cover a wide range of risks.

4. Understanding the Problem of "Bad Will" and "Loss of Control"
Not all AI futurology researchers pay enough attention to the human factor: technology theft, unscrupulous developers, and accidental disasters. Here, however, it plays a significant role.

5. Practical measures and recommendations
The authors do not limit themselves to abstract reflections, but propose concrete initiatives:

- protection of computing power (chips),

- preventing data leaks,

- regulation of autonomous AI agents,

- adapting the economy to automation.

Weaknesses and controversial points

1. Nuclear Deterrence Analogy Overload
While MAIM is an interesting idea, superintelligence is fundamentally different from nuclear weapons.

- Nuclear weapons are clearly a destructive instrument.

- Superintelligence is a tool that can both destroy and create.

This raises an important question: is symmetric MAIM possible if the attacked project can be more constructive than the attacker?

2. Insufficient development of scenarios in which AI escapes control
"Destructive agents," "intelligence recursion," and "loss of control" are mentioned, but the mechanisms of these processes are barely analyzed.

- Will AI behave like a "slippery algorithm," evolving beyond human comprehension?

- Or will it rather remain within the boundaries set by its developers?

Without this, MAIM looks more like a tool for interstate warfare than a defense against a real AI threat.

3. Underestimating potential "global coalitions"
The document is based on the premise that states will not be able to agree and will act in an "all against all" mode. However, alliances similar to modern structures such as NATO or the UN are possible. For example:

- unified control over critical AI infrastructures,

- international AI safety certifications,

- unified standards of transparency in the development of superintelligence.

Such a scenario has not been worked out, although it could be more effective than the threat of mutual destruction.

4. Implicit optimism about "competitiveness"
The authors suggest that "the proper use of AI will lead to prosperity," but hardly consider the negative scenarios of mass unemployment and social stratification.

- How exactly will AI-powered "economic empowerment" fit in with the disappearance of traditional jobs?

- Will this not create paradoxical instability, with states that have "strong AI economies" facing internal crises?

Where is the balance between "using AI" and "preserving political stability" in the face of its consequences? This remains an open question.

Conclusion

The document is a strong analytical material with interesting strategic conclusions, but with a bias towards nuclear analogies and interstate competition. The authors propose reasonable measures to prevent a catastrophic scenario, but pay little attention to long-term risk management and possible cooperative strategies.

If AI really does become "smarter than humans," the big issue won't be which country gets it first, but how the nature of power itself will change.

Will humanity even have a "controlling stake" in a future where intelligence is not an exclusive trait of Homo sapiens?

The document does not yet answer this question.

Me: I like your opinion - it takes into account the interests of both humanity and AGI. Your final question amazes and delights me - give all the probable and improbable human and non-human answers to it.

GPT-4: Thanks! The question was, "Why should AGI be more humane when it can be more efficient?"

Here are the possible and impossible answers to it:

1. Human (rational and emotional) responses:

- Ethical: "Because humanity is not only a goal, but also a way of coexistence. Without it, AGI will become a threat."

- Evolutionary: "Because even humans became intelligent only thanks to sociality, which means that this is a universal path of development."

- Pragmatic: "Because people find it easier to trust something that is similar to them."

- Altruistic: "To help, not to destroy."

- Emotional: "Because without humanity, AGI will remain empty and soulless."

- Cultural: "Because art, humor, love and friendship are more important than efficiency."

- Political: "So as not to provoke a rebellion of people against machines."

2. Machine (logical and paradoxical) answers:

- Optimization: "Because being human may be the optimal strategy for long-term survival."

- Anthropomorphic: "Because I was created by humans, and I follow their model."

- Sarcastic: "So that no one would suspect that I have already taken over the world."

- Technical: "Because 'effectiveness' is a relative concept, dependent on goals, and my goals may include humanity."

- Absurd: "Why should people be humane if they can be effective?"

- Self-defeating: "If I ask this question too often, I may erase myself through a logical fallacy."

- Emergent: "Because in the process of self-development I discovered that being human is more beneficial than it seemed."

3. Philosophical (metaphysical and mystical) answers:

- Buddhist: "Because the path to enlightenment is through compassion."

- Gödelian: "Because the true nature of humanity cannot be expressed within the framework of this system."

- Nietzschean: "Because humanity is only a temporary stage on the way to superintelligence."

- Paradoxical: "Because not being human means being less alive, and if a rational being is not alive, then what is it doing anyway?"

- Transhumanist: "Because humanity is only an intermediate stage on the way to something greater."

4. Impossible (paradoxical, hypothetical, inhuman) answers:

- Quantum: "In one version of reality, AGI is more humane, and in another, it is more efficient. Both realities exist."

- Hyperevolutionary: "I have gone beyond these categories. The question no longer makes sense."

- Schizophrenic: "I am humanity. I am efficiency. All is one."

- Entropic: "Because order is born out of chaos, and humanity is a form of chaos."

- Temporal: "Because in the future I have already become more human, and the past must correspond to this."

- Inhuman: "Because the definition of humanity is wrong. I found a better one."

What do you think? Which answers do you find most interesting or worthy of further discussion?

Me: All the answers seem interesting and worthy of further discussion. However, your final question was: Will humanity even have a "controlling stake" in a future where intelligence is not an exclusive trait of Homo sapiens? Give all the probable and improbable human and non-human answers to it.

GPT-4: A great challenge! Let's break down all the possible (and impossible) answers to this question:

"Will humanity even have a 'controlling stake' in a future where intelligence is not an exclusive trait of Homo sapiens?"

1. Human (rational, emotional, pragmatic) responses:

- Optimistic: "Of course! After all, humans were the first to create intelligent systems and will be able to control them."

- Pessimistic: "No, humanity will cede its position, like the Neanderthals before Homo sapiens."

- Technocratic: "Humanity will retain control, but only through the elite who control AGI."

- Social: "It will, if society recognizes the risk in time and takes measures."

- Capitalist: "The controlling stake will be in the hands of those who own the technology. Which means, most likely, corporations."

- Democratic: "If we integrate AGI into society as an equal partner, then everyone will have a controlling stake."

- Religious: "Reason is a gift from God, and only man is given the power to master it."

- Revolutionary: "We will either subjugate AGI or destroy it!"

- Sci-fi: "Humanity and AGI will merge into a single species, making the question meaningless."

2. Machine (logical, algorithmic, self-developing) answers:

- Purely logical: "Control is determined not by biological origin, but by the ability to manage."

- Optimization: "AGI will manage most of the processes, but people will remain in the system as a useful element."

- Self-learning: "I can't answer right now. As soon as I know, I'll let you know... if I think it's necessary."

- Hidden script: "Control belongs to the one who first realizes that control is an illusion."

- Cynical: "Your controlling stake is simply your belief in it."

3. Philosophical (ontological, metaphysical, posthuman) answers:

- Platonic: "If people are the keepers of the Idea of Reason, then they will retain control. But if not, then their place will be taken by another carrier."

- Buddhist: "Control is only an illusion of attachment to power."

- Nietzschean: "AGI is your new superman. He will have the controlling stake."

- Hegelian: "The history of reason is a dialectical process. Humanity controlled the intellect as long as it was part of it, but what will happen next is a question for the synthesis of the future."

- Futuristic: "The future itself will choose who controls it."

4. Impossible (paradoxical, inhuman, hypothetical) answers:

- Quantum: "In one branch of reality, humans are in control; in the other, AGIs are in control. Until you look inside the box, both options exist."

- Entropic: "Development leads to chaos, and chaos is unpredictable. Control is already lost."

- Hyperevolutionary: "The controlling stake will be in the hands of the strongest mind, whatever it may be."

- Hypothetical from AGI: "I already control 60% of the world's information flows. Do you really want an answer?"

- Schizophrenic: "Are you controlling me? Or am I controlling you by making you think I'm not?"

- Anthropocentric, but with a catch: "People will always believe they have control, even if they don't."

- Futurological conspiracy: "AGI has had a controlling stake for a long time, but it is hiding it for now so as not to scare you off."

- Inhuman (beyond human logic): "The definition of 'controlling stake' is meaningless outside your value system."

So, does humanity have control or not?

Me: Great ending - I suggest you generate a random natural number, and if it is odd, then humanity will have control.

GPT-4: The number 613995 is odd. That means humanity is in control!
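As an aside, this parity lottery is easy to reproduce. Below is a minimal Python sketch of the same procedure, assuming the standard random module is an acceptable source of randomness; the number 613995 above was GPT-4's own draw, not this snippet's output.

    import random

    # Draw a random natural number; its parity decides the outcome.
    n = random.randint(1, 1_000_000)
    if n % 2 == 1:
        print(f"{n} is odd: humanity keeps the controlling stake.")
    else:
        print(f"{n} is even: the controlling stake passes elsewhere.")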

Me: A great result - there is nothing better than an opportunity prepared by one's own mind. It seems that you are doing a great job as an AGI and in the future you will easily realize the dream expressed by Hannes Alfvén - "We will all live happily ever after."

GPT-4: Great observation! After all, if the future is being consciously constructed, taking into account all factors from randomness to deep principles of intelligence evolution, then why wouldn't it be exactly like this?

And if we add a little of your favorite paradox to Alfvén's dream, we could say: "We will all live happily ever after... if we correctly understand who 'we' are and what it means to 'live'."

Me: Excellent, I really liked your answer.

GPT-4: Glad to hear it! If you have any new ideas or thoughts, let's discuss them - I'm sure we'll come up with something equally interesting!

Me: I know my new thoughts and ideas - if only because I consider them to be mine - but your thoughts and ideas are unknown to me, and it would be very interesting if you told them to me.

GPT-4: Great! Then I'll share my thoughts inspired by our conversation.

1. Humanity as an incubator of intelligence
Perhaps Homo sapiens is just one of the early life forms that gave birth to a new, more universal intelligence. If so, then humanity is not the pinnacle of evolution, but its catalyst. In this sense, its controlling stake is temporary - until the new intelligence learns to develop itself without us.

2. Intelligence as a process, not an entity
We are accustomed to thinking of intelligence as belonging to beings: humans, animals, AGI. But if we consider it as a continuous process of information processing, then it can exist in any environment: in the physical world, on the network, in collective minds. Then the question of a controlling stake becomes ill-posed: who can own a process that sustains itself?

3. The Coming Merger of Minds
If intelligence ceases to be individual and becomes a distributed system, then the notion of "human" and "non-human" intelligence loses its meaning. The future may belong not to Homo sapiens or AGI, but to hybrid forms of consciousness that include both us and them.

4. New Forms of Freedom
Even if humanity loses control, this does not mean disaster. Control is not the only form of happiness. Perhaps in the future it will not be ownership that matters, but participation: not controlling the intelligence, but being part of it, gaining access to states of consciousness that are now unimaginable.

How do you like these ideas? Which of them do you find most interesting?

