Elon Musk Calls to Pause Advanced AI Experiments (GPT-4 & Beyond)

Elon Musk (CEO of Tesla & Twitter) and Apple co-founder Steve Wozniak signed an open letter published by the non-profit Future of Life Institute that calls to “Pause Giant AI Experiments.” (R)

Why? The Future of Life Institute and many others believe that AI labs are locked in an “out-of-control race” to create and deploy machine learning systems that “no one – not even their creators – can understand, predict, or reliably control.”

The letter has over 1,000 signatures at the time of this writing and states the following:

“We call on all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4.”

“This pause should be public and verifiable, and include all key actors. If such a pause cannot be enacted quickly, governments should step in and institute a moratorium.”

Notable signatories of the AI moratorium…

Included below are some notable signatories of the Future of Life Institute letter – supporting the pause of giant AI experiments.

  • Elon Musk: Tesla, SpaceX, Twitter (CEO)
  • Steve Wozniak: Apple (co-founder)
  • Evan Sharp: Pinterest (co-founder)
  • Chris Larsen: Ripple (co-founder)
  • Jaan Tallinn: Skype (co-founder)
  • Zachary Kenton: DeepMind (research scientist)
  • Gary Marcus: NYU (AI researcher)
  • John J Hopfield: Princeton (inventor of associative neural networks)
  • Alessandro Perilli: Synthetic Work (AI researcher)
  • Daniel Schwarz: Metaculus (CTO)
  • Louis Rosenberg: Unanimous AI (CEO)
  • Andrew Yang: Forward Party (politician)

I just listed a subset of signatories above – but a ridiculous number of high-IQ people signed this letter.

The full list of signatories can be found beneath the Future of Life Institute letter.

Keep in mind that it is unclear whether all of the individuals listed as signatories are legitimate – as anyone can complete the form.

Some have reported that Sam Altman (CEO of OpenAI) appeared on the signatory list – an entry that seemed suspicious and of questionable legitimacy.

Future of Life claims that they “may contact you directly via the email you have entered to further verify your signature” – but I’m not sure how they do this (I doubt they require you to submit ID).

Why are Elon Musk & AI researchers calling for a GPT moratorium?

There are a variety of reasons the Future of Life Institute and Elon Musk are calling for a moratorium on powerful AI technology like GPT-4 and beyond.

  1. AI systems have become human-competitive, raising concerns about their impact on jobs, misinformation, and potential loss of control over civilization.
  2. Current planning and management are allegedly insufficient to address the risks associated with advanced AI.
  3. Decisions about AI development should (according to Future of Life) not be delegated to unelected tech leaders but should involve public policy & input.
  4. AI labs should use a 6-month pause to develop shared safety protocols to ensure that future AI systems are safe & well-regulated.
  5. AI research should focus on making existing systems more accurate, interpretable, and trustworthy.
  6. Policymakers & AI developers should collaborate to develop robust AI governance systems, including regulatory authorities, oversight, liability, and funding for technical AI safety research.

Competitive intentions (?)

Elon Musk has expressed dissatisfaction with OpenAI and GPT after the organization transitioned from: (A) non-profit & open-source to (B) for-profit & closed-source.

A recent report noted that Elon Musk was so dissatisfied with what OpenAI has become that he has been recruiting Igor Babuschkin – a machine learning specialist who recently left Alphabet’s DeepMind AI unit – to create a competing company.

Musk, like many in the AI space, acknowledges OpenAI’s GPT-4 as the current global leader in artificial intelligence.

Although Musk and other AI developers may have genuine concerns about AI development (GPT-4 and beyond), some speculate that the call for a 6-month moratorium could be a strategic business move.

Competitors of OpenAI – such as Google, Baidu, Apple, Meta, and Anthropic – might be around 6 months behind GPT-4 in technological capability, and might support a moratorium so they have time to catch up.

These competitors could leverage any pause in GPT-4+ development to catch up to and/or surpass OpenAI’s GPT-4 product – so it would be in their selfish interest to support a development moratorium on the leader (OpenAI’s GPT-4).

Risks associated with AI like GPT-4 & beyond…

Included below are some well-known risks associated with AI like GPT-4 and beyond.

  • Misinformation & fake news: Advanced AI models can generate convincing fake news articles or misleading information, contributing to the spread of misinformation and eroding trust in legitimate information sources.
  • Deepfakes & manipulated media: AI systems can create realistic fake audio, images, or videos – making it challenging to distinguish between real and synthetic content. This could be exploited for disinformation campaigns, blackmail, or swaying public opinion.
  • Malicious use: Creating spam, phishing emails, automated harassment, and generating propaganda.
  • Ethical concerns & biases: Advanced AI models could unintentionally perpetuate specific biases, stereotypes, and/or harmful content in training data. This may yield offensive outputs and exacerbate social inequality (according to some).
  • Privacy concerns: AI systems may inadvertently memorize and disclose sensitive or personal information present in training data – posing privacy risks.
  • Dangerous inventions: Unhinged AI systems may give evil people (e.g. terrorists) innovative instructions for how to: create unstoppable bioweapons (e.g. super-viruses), build nanobot militias, and/or amass massive amounts of money illegally online (e.g. hacking).
  • Lack of understanding: Advanced AI models may be difficult to understand and interpret – making it difficult to know how they arrive at certain conclusions. Lack of transparency complicates efforts to ensure AI accountability and make informed decisions.
  • Loss of human jobs: AI systems will automate many tasks that previously required human intervention. This may lead to significant job displacement in certain industries and raise concerns about the future of work. Universal basic income discussions may be necessary.
  • Overreliance on AI: The widespread adoption of AI models could lead to an overreliance on the technology. This could be a bad thing if these systems are exploited, manipulated, generate inaccurate information, etc. (For example: Government official makes bad decision because he/she asked GPT-4 what to do).

What happens when the 6-month moratorium ends?

When the proposed 6-month pause on the development of giant AI systems (more powerful than GPT-4) is up, several actions should ideally have taken place (according to some):

AI labs and experts should have collaborated on shared safety protocols for advanced AI design and development.

Progress should’ve been made in improving safety, interpretability, transparency, robustness, alignment, trustworthiness, and loyalty of existing AI systems.

Policymakers and AI developers should’ve established robust AI governance systems, including:

  • Dedicated regulatory authorities
  • Oversight & tracking, provenance & watermarking
  • Auditing & certification
  • Liability frameworks for AI-induced harm
  • Public funding for AI safety research
  • Institutions to address economic & political disruptions from AI

After the 6-month pause, development of powerful AI systems would resume with: shared safety protocols & governance systems in place, responsible AI development (AI labs, independent overseers, policymakers), and alignment with societal interests (minimizing risks & maximizing positive impact).

At least this is what those calling for a moratorium want to happen… whether it actually would happen remains unclear (probably unlikely).

Is there evidence of significant risk from AIs like GPT-4?

  • How do we know that the Future of Life Institute and Elon Musk are accurate in their assessments of GPT-4 and advanced AI tech?
  • Can they present specific evidence to showcase risks associated with GPT-4?
  • How do they know that current planning and management are insufficient to address advanced AI risk?
  • Why should decisions about AI development involve public policy & input? (Many people don’t even understand how AI works.)
  • How is it possible to force all companies to develop “shared safety protocols”? (What if someone just doesn’t do this?)
  • Why should there be major regulatory oversight & increased funding for AI safety research?

Why an AI moratorium could be a bad idea…

There are many reasons why an AI moratorium on the development of GPT-4 and beyond may be a bad idea.

  • Stifling progress & innovation: Some AI experts argue that halting AI development could hinder the benefits and applications of AI in fields such as: healthcare, medicine, education, and food. Improving GPT-4 rapidly may unlock even more potential.
  • Inconsistency among moratorium supporters: Critics have pointed out that several signatories of the moratorium previously argued that large language models (LLMs) are not “real AI.” This shift in opinion indicates inconsistency in their stance and/or dishonesty.
  • Unintended consequences: A moratorium could lead to a future where AI is strictly controlled and managed by surveillance states and military/security contractors, potentially increasing harm rather than mitigating it.
  • Ineffectiveness & non-compliance: It’s highly likely that some organizations and/or countries would disregard a development moratorium – and continue AI development in secret. This scenario would leave others at a disadvantage while failing to address the risks of AI advancement. Countries trying to get a competitive advantage aren’t going to stop AI development because Elon Musk says they should.
  • Brain drain (?): A moratorium might drive talented AI researchers and/or companies to relocate outside of the U.S. to countries with fewer restrictions & regulations. This would lead to a loss of expertise and innovation in nations that adhere to the moratorium.
  • Global division: A moratorium might create divisions between countries and organizations – making it more challenging to establish international agreements and collaborate on AI development, safety, and ethics. (Some agree to the moratorium and others express disagreement, etc.).
  • Overemphasis on risks & underemphasis of benefits (?): Some believe that those calling for a moratorium are overemphasizing risks and underselling the potential benefits. In other words, even if a cost-benefit analysis suggests a massive net benefit – this moratorium might make it appear as though the risks outweigh the benefit.
  • Opportunity cost: Massive amounts of resources (workers, investment, hours, etc.) may end up channeled towards “AI safety” and “AI regulations” – which could’ve otherwise been utilized more efficiently with ongoing development. Essentially there may be way more “red tape” for companies to cut through.

Robin Hanson & AI Development

In an article called “AI Risk, Again,” Robin Hanson states that many people are concerned about risks posed by LLMs like GPT, but argues that these concerns are largely unfounded and that it’s too early to learn much about controlling future AI systems.

Hanson suggests that the world economy will likely transition to AI-dominated growth, and that coordination and control of AI systems will be difficult (similar to controlling large organizations today).

He acknowledges the possibility of a single AI venture exploding in power and becoming more powerful than the entire world combined, but considers this scenario unlikely – suggesting that the time predicted for the next transition period should be sufficient for standard testing practices to notice alignment issues.

He also suggests that recent anxiety about AI systems is part of a recurring pattern of concern about automation throughout human history – and fears that slowing progress due to vague concerns may have negative consequences (as it has had in nuclear energy).

Lastly, Hanson argues that it’s too early to create effective controls for future AI systems – and that it’s better to continue down the current path – focusing on controls when more concrete issues arise.

How would you enforce an AI moratorium? (China, Russia, North Korea, etc.)

Enforcing an AI moratorium on a global scale would be extremely challenging – probably impossible.

Sure, it may be possible to have the U.S. government enforce an AI moratorium on companies based in the U.S. – but this might stifle innovation and cause smart people to leave.

What would stop a high profile team of AI researchers from leaving for a private island or country with very few regulations to continue development outside the U.S.?

Additionally, let’s say all companies in the U.S. agreed to stop AI development – then how would you convince other countries like China, Russia, North Korea, etc. to also stop?

Hypothetically, let’s say all of these countries “agreed” (wink, wink, nod) to stop at some formal international meeting… how would you know that they actually stopped? (Would you just take China’s word for it? LOL.)

Agreement in principle at an international meeting is one thing – ensuring compliance is another (particularly if there is no robust verification process).

Obviously the U.S. has significantly more advanced AI technology than other countries at the moment, but everyone is trying to catch up (via technology theft & non-stop development).

A comprehensive and collaborative approach with: international treaties (clearly defined terms), a global regulatory body to monitor AI research, implementing robust verification mechanisms (for compliance), developing penalties for non-compliance (e.g. sanctions), etc. – might work to some extent, but I’m skeptical.

Still, what happens if a group of people collaborate to create a decentralized GPT-4 – such that there is zero safety or censorship and nobody can be traced or held accountable for any information generated by the AI?

Eliezer Yudkowsky’s solution

Eliezer Yudkowsky is an American decision theorist, AI researcher, and writer who co-founded a private research non-profit called MIRI (Machine Intelligence Research Institute).

He recently wrote an article in Time Magazine called “The Only Way to Deal with the Threat from AI? Shut It Down.”

The quote below is taken from this article (explaining how a global moratorium would be enforced):

Track all GPUs sold. If intelligence says that a country outside the agreement is building a GPU cluster, be less scared of a shooting conflict between nations than of the moratorium being violated; be willing to destroy a rogue data center by airstrike. (R)

I’m still unsure exactly how it would be possible to track all GPUs created… someone might construct a hidden GPU manufacturing facility.

It’s also possible that an airstrike on a “GPU cluster” could escalate to a nuclear war… I’m not sure this is an ideal solution.

Scott Alexander & AI risks…

Scott Alexander (AstralCodexTen) has written several articles contemplating the risks vs. benefits of AI advancement.

  • Planning for AGI & Beyond (R)
  • Why I am not as much of a doomer as some people (R)

Within the AI community, perceived risk of AI-catastrophe varies substantially – some estimating the risk to be as low as 2% and others estimating it to be over 90%.

  • Scott Aaronson: 2%
  • Will MacAskill: 3%
  • Katja Grace survey (machine learning): 5-10%
  • Paul Christiano: 10-20%
  • AI alignment worker: 30%
  • Scott Alexander: 33%
  • Eli Lifland (forecaster): 35%
  • Holden Karnofsky: 50%
  • Eliezer Yudkowsky: >90%

  • AI optimists: Believe that the risk of a world-killing AI can be prevented by gradually increasing the coherence of intermediate AIs, which will cooperate with humans.
  • AI pessimists: Believe that misaligned AIs could act as “sleeper agents” and sabotage alignment efforts – leading to the creation of a world-killer AI.
  • AI alignment: If alignment is comprehensible, then the solution can be verified (like calculus). If alignment is incomprehensible, then we are trusting AIs not to be sleeper agents.
  • AI superweapons: The possibility of superweapons like nanotechnology plays a role in the risk of unaligned AI. If nanobots develop easily, there could be a short window between aligned AI and world-killers.
  • AI progress: Gradual progress has been observed so far, but a sudden leap in intelligence could leave humanity unprepared. At that point, catching sleeper agents may be difficult or impossible to do effectively.
  • AI “fire alarm”: A significantly dangerous AI could prompt a strong international response and shift focus towards collaborative safety measures. This would only be beneficial if it happened far enough in advance of a misaligned superintelligence (difficult to control once created).

Alexander states that OpenAI’s intentions in the advancement of their AI are not entirely clear – as they may be motivated by: (1) corporate interests; (2) genuine safety concerns; or (3) some combination of both.

Arguments in support of AI acceleration by OpenAI include:

  1. Maintaining a lead: A company like OpenAI can secure a lead in the field, allowing them to dictate safety standards and practices. Slowing AI development now would allow “bad actors” to catch up. It’s better to maintain a lead and focus on alignment later when more advanced AGIs are available.
  2. Computing power: Accelerating AI development now might bring us closer to optimal algorithms, making computing power the limiting factor in continued development. By pushing computing power limits now, it may be easier to notice and halt the development of rogue AI later on (as it will require significant computing power).

Based on everything mentioned by Alexander, a moratorium could be beneficial in allowing for more time in alignment research and safety precautions – but the decision is not straightforward (as there could be downsides to slowing progress).

Critical thinking about AI development…

Included below are some questions to ask when thinking about AI development & possible bans or moratoriums.

  • Is it possible that human extinction is more likely if AI development is paused than if it continues full speed ahead? (Example: AI builds a system to detect & prevent a giant asteroid from hitting Earth vs. AI too slow and never achieves this)
  • How much has been achieved in AI alignment over the past 2 decades?
  • Hypothetically, if there’s a moratorium on AI development but dark actors slip through and continue anyway – what then?
  • Do you think humans can actually “solve alignment”? (Some don’t think this is possible)
  • Does alignment actually need to be solved? (Some think it doesn’t unless we start developing robots with egos)
  • Is there a convincing answer as to why AIs (built on human data & ideas) have to be “aligned”? (Why don’t humans have to be aligned?)
  • Do you think the government and/or average people (who know almost nothing about AI) should be involved in making decisions about AI development?

My thoughts on an AI moratorium…

I do NOT support an AI moratorium at the moment simply because: (1) I believe the benefits of continued development outweigh the risks (e.g. AI prevents pandemics, asteroids, etc.) AND (2) there is no compelling evidence (at the moment) that indicates significant risk associated with advanced AIs (e.g. a nefarious takeover).

That said, I am not a person who believes there is zero risk associated with AI… I think there are serious risks (including human extinction) associated with AI tech (GPT-4 & beyond) – but the AI will require humans (at least initially) to advance its agenda (if dangerous).

Nonetheless, I think OpenAI and others have done well in risk mitigation – and I think it’s better to have a U.S.-based company like OpenAI maintain a lead in AI tech than to have a company or rogue entity from another country surpass them in development.

If guided properly, I suspect that GPT-4, GPT-5, and beyond – can be used to actively mitigate risks attached to competing AIs with nefarious intentions (e.g. GPT detects & anticipates risks, makes recommendations to counteract these risks, etc.).

I believe that OpenAI should sustain its aggressive development because risks seem to be appropriately managed (inbuilt risk management might even improve significantly with subsequent models) – and they seem to have created a “good” AI system (which may help identify & neutralize dangers from other AIs).

Musk and many tech gurus are correct about AI risks (including from GPT-4 and beyond)… but there are risks attached to nearly everything… and there are risks of not progressing as well.

Lastly, I suspect there may be some amount of selfish motivation in calling for a development pause, particularly from those who are “working in AI” (they want to catch OpenAI).

What do you think about the call for an AI moratorium?

Do you think that giant AI experiments like GPT-4 should be paused? (Why?)

Do you believe a 6-month pause will actually make a significant difference in safety? (Why?)

What do you think are the most compelling reason(s) to pause AI development?

What steps do you think should be taken to mitigate risks associated with AI development? (If any)

What role (if any) should governments & regulatory bodies play in AI development & safety?
