
 Our Oppenheimer Moment: The Creation of AI Weapons

valerie0821
post Aug 4 2023, 12:04 PM, updated 3y ago

New Member
*
Junior Member
28 posts

Joined: Dec 2020
In 1942, J. Robert Oppenheimer, the son of a painter and a textile importer, was appointed to lead Project Y, the military effort established by the Manhattan Project to develop nuclear weapons. Oppenheimer and his colleagues worked in secret at a remote laboratory in New Mexico to discover methods for purifying uranium and ultimately to design and build working atomic bombs.

He had a bias toward action and inquiry.

“When you see something that is technically sweet, you go ahead and do it,” he told a government panel that would later assess his fitness to remain privy to U.S. secrets. “And you argue about what to do about it only after you have had your technical success. That is the way it was with the atomic bomb.” His security clearance was revoked shortly after his testimony, effectively ending his career in public service.

Oppenheimer’s feelings about his role in conjuring the most destructive weapon of the age would shift after the bombings of Hiroshima and Nagasaki. At a lecture at the Massachusetts Institute of Technology in 1947, he observed that the physicists involved in the development of the bomb “have known sin” and that this is “a knowledge which they cannot lose.”

We have now arrived at a similar crossroads in the science of computing, one that connects engineering and ethics, where we will again have to choose whether to proceed with the development of a technology whose power and potential we do not yet fully apprehend.

The choice we face is whether to rein in or even halt the development of the most advanced forms of artificial intelligence, which some argue may threaten or someday supersede humanity, or to allow more unfettered experimentation with a technology that has the potential to shape the international politics of this century in the way nuclear arms shaped the last one.

The emergent properties of the latest large language models — their ability to stitch together what seems to pass for a primitive form of knowledge of the workings of our world — are not well understood. In the absence of understanding, the collective reaction to early encounters with this novel technology has been marked by an uneasy blend of wonder and fear.

Some of the latest models have a trillion or more parameters, tunable variables within a computer algorithm, representing a scale of processing that is impossible for the human mind to begin to comprehend. We have learned that the more parameters a model has, the more expressive its representation of the world and the richer its ability to mirror it.


[Chart: The rapid advance of A.I. models compared with nuclear technology]

As artificial intelligence networks grow larger and more capable, it becomes ever more tempting to use them to build highly effective weapons systems, echoing the moral dilemma faced by the inventors of the nuclear bomb during World War II.

What has emerged from that trillion-dimensional space is opaque and mysterious. It is not at all clear — not even to the scientists and programmers who build them — how or why the generative language and image models work. And the most advanced versions of the models have now started to demonstrate what one group of researchers has called “sparks of artificial general intelligence,” or forms of reasoning that appear to approximate the way that humans think.

In one experiment that tested the capabilities of GPT-4, the language model was asked how one could stack a book, nine eggs, a laptop, a bottle and a nail “onto each other in a stable manner.” Attempts at prodding more primitive versions of the model into describing a workable solution to the challenge had failed.

GPT-4 excelled. The computer explained that one could “arrange the nine eggs in a three-by-three square on top of the book, leaving some space between them,” and then “place the laptop on top of the eggs,” with the bottle going on top of the laptop and the nail on top of the bottle cap, “with the pointy end facing up and the flat end facing down.”

It was a stunning feat of “common sense,” in the words of Sébastien Bubeck, the French lead author of the study who taught computer science at Princeton University and now works at Microsoft Research.

It is not just our own lack of understanding of the internal mechanisms of these technologies but also their marked improvement in mastering our world that has inspired fear. A growing group of leading technologists has issued calls for caution and debate before pursuing further technical advances. An open letter to the engineering community calling for a six-month pause in developing more advanced forms of A.I. has received more than 33,000 signatures. On Friday, at a White House meeting with President Biden, seven companies that are developing A.I. announced their commitment to a set of broad principles intended to manage the risks of artificial intelligence.

In March, one commentator published an essay in Time magazine arguing that “if somebody builds a too-powerful A.I., under present conditions,” he expects “that every single member of the human species and all biological life on Earth dies shortly thereafter.”

Concerns such as these regarding the further development of artificial intelligence are not unjustified. The software that we are building can enable the deployment of lethal weapons. The potential integration of weapons systems with increasingly autonomous artificial intelligence software necessarily brings risks.

But the suggestion to halt the development of these technologies is misguided.

Some of the attempts to rein in the advance of large language models may be driven by a distrust of the public and its ability to appropriately weigh the risks and rewards of the technology. We should be skeptical when the elites of Silicon Valley, who for years recoiled at the suggestion that software was anything but our salvation as a species, now tell us that we must pause vital research that has the potential to revolutionize everything from military operations to medicine.

A significant amount of attention has also been directed at policing the language that chatbots use and at patrolling the limits of acceptable discourse with the machine. The desire to shape these models in our image, and to require them to conform to a particular set of norms governing interpersonal interaction, is understandable but may be a distraction from the more fundamental risks that these new technologies present. The focus on the propriety of the speech produced by language models may reveal more about our own preoccupations and fragilities as a culture than it does about the technology itself.

Our attention should instead be more urgently directed at building the technical architecture and regulatory framework that would construct moats and guardrails around A.I. programs’ ability to autonomously integrate with other systems, such as electrical grids, defense and intelligence networks, and our air traffic control infrastructure. If these technologies are to exist alongside us over the long term, it will also be essential to rapidly construct systems that allow more seamless collaboration between human operators and their algorithmic counterparts, to ensure that the machine remains subordinate to its creator.

We must not, however, shy away from building sharp tools for fear they may be turned against us.

A reluctance to grapple with the often grim reality of an ongoing geopolitical struggle for power poses its own danger. Our adversaries will not pause to indulge in theatrical debates about the merits of developing technologies with critical military and national security applications. They will proceed.

This is an arms race of a different kind, and it has begun.

Our hesitation, perceived or otherwise, to move forward with military applications of artificial intelligence will be punished. The ability to develop the tools required to deploy force against an opponent, combined with a credible threat to use such force, is often the foundation of any effective negotiation with an adversary.

The underlying cause of our cultural hesitation to openly pursue technical superiority may be our collective sense that we have already won. But the certainty with which many believed that history had come to an end, and that Western liberal democracy had emerged in permanent victory after the struggles of the 20th century, is as dangerous as it is pervasive.

We must not grow complacent.

The ability of free and democratic societies to prevail requires something more than moral appeal. It requires hard power, and hard power in this century will be built on software.

Thomas Schelling, an American game theorist who taught economics at Harvard and Yale, understood the relationship between technical advances in the development of weaponry and the ability of such weaponry to shape political outcomes.

“To be coercive, violence has to be anticipated,” he wrote in the 1960s as the United States grappled with its military escalation in Vietnam. “The power to hurt is bargaining power. To exploit it is diplomacy — vicious diplomacy, but diplomacy.”

While other countries press forward, many Silicon Valley engineers remain opposed to working on software projects that may have offensive military applications, including machine learning systems that make possible the more systematic targeting and elimination of enemies on the battlefield. Many of these engineers will build algorithms that optimize the placement of ads on social media platforms, but they will not build software for the U.S. Marines.

In 2019, Microsoft faced internal opposition to accepting a defense contract with the U.S. Army. “We did not sign up to develop weapons,” employees wrote in an open letter to corporate management.

A year earlier, an employee protest at Google preceded the company’s decision not to renew a contract for work with the U.S. Department of Defense on a critical system for planning and executing special forces operations around the world. “Building this technology to assist the U.S. government in military surveillance — and potentially lethal outcomes — is not acceptable,” Google employees wrote in an open letter to Sundar Pichai, the company’s chief executive officer.

I fear that the views of a generation of engineers in Silicon Valley have meaningfully drifted from the center of gravity of American public opinion. The preoccupations and political instincts of coastal elites may be essential to maintaining their sense of self and cultural superiority but do little to advance the interests of our republic. The wunderkinder of Silicon Valley — their fortunes, business empires and, more fundamentally, their entire sense of self — exist because of the nation that in many cases made their rise possible. They charge themselves with constructing vast technical empires but decline to offer support to the state whose protections and underlying social fabric have provided the necessary conditions for their ascent. They would do well to understand that debt, even if it remains unpaid.

Our experiment in self-government is fragile. The United States is far from perfect. But it is easy to forget how much more opportunity exists in this country for those who are not hereditary elites than in any other nation on the planet.

Our company, Palantir Technologies, has a stake in this debate. The software platforms that we have built are used by U.S. and allied defense and intelligence agencies for functions like target selection, mission planning and satellite reconnaissance. The ability of software to facilitate the elimination of an enemy is a precondition for its value to the defense and intelligence agencies with which we work. At Palantir, we are fortunate that our interests as a company and those of the country in which we are based are fundamentally aligned. In the wake of the invasion of Ukraine, for example, we were often asked when we decided to pull out of Russia. The answer is never, because we were never there.

A more intimate collaboration between the state and the technology sector, and a closer alignment of vision between the two, will be required if the United States and its allies are to maintain an advantage that will constrain our adversaries over the long term. The preconditions for a durable peace often come only from a credible threat of war.

In the summer of 1939, from a cottage on the North Fork of Long Island, Albert Einstein sent a letter — which he had worked on with Leo Szilard and others — to President Franklin Roosevelt, urging him to explore building a nuclear weapon, and quickly. The rapid technical advances in the development of a potential atomic weapon, Einstein and Szilard wrote, “seem to call for watchfulness and, if necessary, quick action on the part of the administration,” as well as a sustained partnership founded on “permanent contact maintained between the administration” and physicists.

It was the raw power and strategic potential of the bomb that prompted their call to action then. It is the far less visible but equally significant capabilities of these newest artificial intelligence technologies that should prompt swift action now.

https://www.nytimes.com/2023/07/25/opinion/...telligence.html

loki
post Aug 4 2023, 12:06 PM

Look at all my stars!!
*******
Senior Member
2,109 posts

Joined: Jan 2003
From: Damansara Damai, PJ



so it's time to dig 100 feet underground and store food ?
potatolala
post Aug 4 2023, 12:08 PM

Getting Started
**
Junior Member
141 posts

Joined: Oct 2020
Fallout
Kaya Butter Toast
post Aug 4 2023, 12:10 PM

Casual
***
Junior Member
325 posts

Joined: Feb 2022

Malaysia oppenheimer moment : Bomb under siti kasim kar
delon85
post Aug 4 2023, 12:16 PM

Ad Astra
******
Senior Member
1,061 posts

Joined: Jan 2003



tl;dr wall of text
killdavid
post Aug 4 2023, 12:18 PM

Senior Satire Officer
******
Senior Member
1,636 posts

Joined: Aug 2005
From: Vault 13



One top ex google ai scientist says we are making big mistake with ai. The fatal mistake here is to allow ai the ability to code
quebix
post Aug 4 2023, 12:26 PM

Gelato Director
******
Senior Member
1,237 posts

Joined: Sep 2006
From: Ampang. KL.
QUOTE(killdavid @ Aug 4 2023, 12:18 PM)
One top ex google ai scientist says we are making big mistake with ai. The fatal mistake here is to allow ai the ability to code
*
even if we dont allow, AI will probably know how to code by itself one day.
JimbeamofNRT
post Aug 4 2023, 12:29 PM

the Original Lanji@_ Chicken Rice Shop Since 2002
******
Senior Member
1,902 posts

Joined: Sep 2012

QUOTE(loki @ Aug 4 2023, 12:06 PM)
so it's time to dig 100 feet underground and store food ?
*
JimbeamofNRT
post Aug 4 2023, 12:31 PM

the Original Lanji@_ Chicken Rice Shop Since 2002
******
Senior Member
1,902 posts

Joined: Sep 2012

QUOTE(quebix @ Aug 4 2023, 12:26 PM)
even if we dont allow, AI will probably know how to code by itself one day.
*
What if covid 19 pandemic was one of AI experiment?

part of bigger game on how to control human population?
soul78
post Aug 4 2023, 12:32 PM

Enthusiast
*****
Junior Member
931 posts

Joined: Jul 2005


Explain to me rike ayam 7 year old in less than 100 sentence pls...
smallbug
post Aug 4 2023, 12:34 PM

Enthusiast
*****
Senior Member
874 posts

Joined: Nov 2005


start building your silos!
Raddus
post Aug 4 2023, 12:34 PM

Getting Started
**
Junior Member
239 posts

Joined: Mar 2018

Oppenheimer ending was actually terminator judgement day
killdavid
post Aug 4 2023, 12:35 PM

Senior Satire Officer
******
Senior Member
1,636 posts

Joined: Aug 2005
From: Vault 13



QUOTE(quebix @ Aug 4 2023, 12:26 PM)
even if we dont allow, AI will probably know how to code by itself one day.
*
Either way we are fucked.
Ai will see just how selfish and self destructive humans are and do something about it
valerie0821
post Aug 4 2023, 12:38 PM

New Member
*
Junior Member
28 posts

Joined: Dec 2020
QUOTE(soul78 @ Aug 4 2023, 12:32 PM)
Explain to me rike ayam 7 year old in less than 100 sentence pls...
*
Alright kiddo, let's talk about a very special man named J. Robert Oppenheimer. You know how in cartoons, characters invent something and don't know if it's good or bad? Well, Oppenheimer was like that, but in real life! He was a scientist who helped make a super powerful thing called the atomic bomb during World War II. They made this at a secret place called Project Y.

This bomb was very powerful and could destroy entire cities. When Oppenheimer saw what it could do, he felt guilty, kind of like how you might feel if you accidentally broke your friend's toy. He once said that he and the other scientists "have known sin", which is a grown-up way of saying they did something they regretted.

Nowadays, we're in a similar situation but with something called Artificial Intelligence (AI). AI is like really, really smart computers. Just like with the atomic bomb, we don't know if AI is good or bad yet.

Some people are scared that AI might become too smart and powerful, like a robot in a movie that becomes the boss of people. Others believe AI can help us in many ways, like solving difficult problems or creating new inventions.

Just like the bomb, AI can also be used to make powerful weapons. This makes some people worry, and they want to stop making AI for a while until we understand it better. They're kind of like parents who want their kids to stop playing until they've cleaned their room.

Some people even think that a super powerful AI could be so dangerous that it might hurt everyone on Earth. That's why there's a big debate about whether we should keep making AI smarter, or slow down and be very careful.

But stopping the development of AI might not be the best idea. Imagine if you found a very sharp knife. It could be dangerous if used the wrong way, but it could also be very useful for cooking. So, the best solution might be to learn how to use the knife safely, rather than never using it at all.

In this debate, some people worry more about what AI says or writes, like if a talking robot said something rude or hurtful. But others think the bigger problem is how AI could interact with important systems, like how a city manages its electricity, or how planes avoid each other in the sky.

We need to make sure that people are always in charge of AI, not the other way around. But just like the knife, we shouldn't be too scared to use AI, even if it's powerful.

Remember how I said the atomic bomb was made during a war? Well, in a way, there's a different kind of race happening now, but with AI. Countries that don't make AI might find themselves at a disadvantage. Imagine if everyone else in class had a calculator and you had to do math with just a pencil!

Just like Oppenheimer, many scientists and engineers today don't want to work on things that could be used to hurt people. Some even refuse to work on military projects. But it's important to remember that just because something can be used in a bad way, doesn't mean it's bad in itself. A knife can hurt people, but it can also be used to cook a delicious meal.

Just like Oppenheimer and his atomic bomb, the people creating AI are facing difficult choices. They need to make sure AI is used for good things, not bad things. And they also need to make sure people are always the bosses of AI, not the other way around. That's why it's really important that we learn about AI and understand it better!
sonypshomer
post Aug 4 2023, 12:47 PM

Regular
******
Senior Member
1,594 posts

Joined: Aug 2017
This is more interesting coming from another scientist:


coconutxyz
post Aug 4 2023, 01:46 PM

Casual
***
Junior Member
422 posts

Joined: Jan 2011
QUOTE(soul78 @ Aug 4 2023, 12:32 PM)
Explain to me rike ayam 7 year old in less than 100 sentence pls...
*
Use gpt to summarize

 
