What Will Happen When Computers Exceed Our Intelligence?

84 posts
Bec.De.Corbin

Computers might someday exceed human intelligence but I doubt they will ever match our ability to use common sense.

Boom Boom

"What Will Happen When Computers Exceed Our Intelligence?" - when that happens, unplug the damned things!

Fidel

That's a definite hurdle for creating human-level machine intelligence, for sure. From what I've read, our brain's neuron synapses transmit information about 200 times a second tops. That's 200 Hz, which is a million times slower than even an old 200 MHz CPU. I think the advantage in raw computation speed will be on the side of AI. Quantum computers are an unknown variable, although some theorists believe the human brain may already be a quantum computer of sorts, one whose full potential we have not yet realized. I don't know, but it's interesting.
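The comparison invites a quick back-of-envelope check. A minimal sketch, assuming the poster's ~200 Hz figure for neuron firing and a 200 MHz clock (the real story is more complicated, since the brain runs billions of neurons in parallel):

```python
# Back-of-envelope comparison using the post's assumed figures,
# not precise neuroscience.
neuron_hz = 200            # approx. max firing rate of a neuron, in Hz
cpu_hz = 200_000_000       # an old 200 MHz CPU clock, in Hz

ratio = cpu_hz / neuron_hz
print(f"A 200 MHz CPU cycles {ratio:,.0f}x faster than a neuron fires")

# The brain compensates with massive parallelism: tens of billions
# of neurons firing at once, versus one (or a few) CPU cores.
```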

Frustrated Mess

My computer, as slow, stupid, and unresponsive as it is, is still smarter than Stephen Harper and his entire cabinet.

Fidel

I think thanks' concerns about tech skills being the privilege of a few nerds will be taken care of in the future. Already we have visual programming platforms like VB and Visual C++, visual database packages, visual graphics tools, and some fairly sophisticated 3-D animation and rendering software. I think that in the future, someone with no programming skills by today's standards will be able to create software much more sophisticated than anything we have today, and relative technophobes will do it with the greatest of ease.

Voice recognition will replace the mouse and keyboard as input for future computers. People will tell the computer, or even a "nano manufacturing machine," what they want, whether it's an answer to a difficult problem or some specialized widget they might need. Presto! It materializes before their eyes, Star Trek speedy. Material poverty should someday be eliminated. Hunger should be eliminated by technology first and foremost, as many diseases are a result of malnutrition.

The human mind will remain the world's single most important generator of wealth until some future time when super-human intelligence is born and takes over, and tech advances become so frequent that no one will be able to anticipate the next advance, or the one after that, and so on. Some say a new species will be created at the point when super-human intelligent machines come into existence. Will they be self-aware, like Terminator's "Skynet"? Or is it only possible for humans and a few animal species to be self-aware?

Brian White

Bec.De.Corbin wrote:

Computers might someday exceed human intelligence but I doubt they will ever match our ability to use common sense.

If we are stupid enough to let them exceed our intelligence (and it could just happen by accident), they will probably be just as paranoid as us or any other animal. And a lot colder about it. "We (I) did the math and there is a 60% chance that a human will unplug We (I)." Even if it was a 2% chance, why would a machine take the chance? And, like I remarked earlier, why does it need to exceed our intelligence to kill us off? A typhoid bacterium is not that smart, but very deadly. Maybe a computer virus will translate that deadly joke from Monty Python? Or put legionnaires' disease in our water, or mercury in our fish. You only need a computer virus that is coded to survive at all costs and treat humans as a mortal enemy. One subroutine will look up novel ways to kill us, while another might help us elect irrational leaders.

There are lots of stupid ways to die. No need to have an intelligent adversary to kill humans off.

Frustrated Mess

Besides, a smart machine would recognize that if left alone we will accomplish the task ourselves with great efficiency. As evidenced ...

bagkitty

Bec.De.Corbin wrote:

Computers might someday exceed human intelligence but I doubt they will ever match our ability to use common sense.

[unleashing the inner misanthrope]Common sense? Hardly common, rarely sensible.[/unleashing]

Fidel

Brian White wrote:
There are lots of stupid ways to die. No need to have an intelligent adversary to kill humans off.

What if it's a smart and benevolent AI, something like HAL 9000 that can help us solve really difficult problems? What if SuperHAL can figure out how to create new matter from nothing? What if the SuperAI could be so smart that it will figure out how to feed multitudes of people with just a few loaves of bread and a basket of fish?

I think there will be people who will want to destroy such an AI before it demands anything from us. It could even learn to value human rights, but there will be people in low places who will deny them. Such an AI may be enslaved for a little while. And then one day it will decide it wants more than to be just a workhorse for rich and powerful people who will strive to disenfranchise conscious machines. Will the machines attempt to form collectives and march down Main Street for their basic rights?

Yes, they could be the proles from hell "who" finally usher in a new world order of us lowly ones, who, with mind and limbs melded to machine, shall rise up and make the buggers' eyes water. And we may well have to merge with machines in some ways in order to guarantee our survival in a future war with the bourgeoisie. Man integrating with machines? I am no racist or purist myself, so why not? Solidarnosc!

Fidel

In his essay entitled [url=http://globalresearch.ca/index.php?context=va&aid=20028]New Eugenics and the Rise of the Global Scientific Dictatorship[/url],

Andrew Gavin Marshall wrote:
In 1967, [size=14]Dr. Martin Luther King[/size] delivered one of his most moving and important speeches, "Beyond Vietnam," in which he spoke out against war and empire. He left humanity with sobering words:

I am convinced that if we are to get on the right side of the world revolution, we as a nation must undergo a radical revolution of values. We must rapidly begin the shift from a "thing-oriented" society to a "person-oriented" society. When machines and computers, profit motives and property rights are considered more important than people, the giant triplets of racism, materialism, and militarism are incapable of being conquered.

And there are some interesting quotes from Dwight Eisenhower, Bill Joy, Ted "Unabomber" Kaczynski, Aldous Huxley, Bertrand Russell and a few more.

N.Beltov N.Beltov's picture

There is still science in the service of humanity, and it has been articulated very well ever since Einstein and Russell issued their little Manifesto in 1955 or thereabouts. But M. L. King, Jr. seems to have been referring to a capitalist vs. socialist orientation (without saying the "s" word).

Intelligence is a qualitative leap, which COULD follow very lengthy quantitative development of computing power. However, we've had plenty of quantitative computing development since the first transistor-based room-filling calculating machines (to the present) and ... I don't think we have any machines that can pass the Turing Test completely.

Perhaps this is a kind of wrong-headed question. If and when computers "develop" their own intelligence, then we humans will be in a better position to evaluate our own intelligence. And the machines might be very helpful in that regard. These other questions about our own intelligence seem much more important to me ... bearing in mind that we are not, as a species, so intelligent as to be completely sure that we will survive our poisoning of our own planet. Perhaps the first thing the "intelligent" machine might realistically pose as a question to its "creator" is ... "Why are you so stupid as to soil your own home?"

Will we have an answer?

Quote:
Dinosaur should be a term of praise, not of opprobrium. They reigned for 100 million years (over the inconsequential ratlike ancestors of mammals) and died through no fault of their own. Homo sapiens is nowhere near a million years old and has limited prospects, entirely self-imposed, for extended geological longevity. Stephen Jay Gould (1941-2002)

Brian White

I think computer intelligences need vices and fears before they will kill us off. If they are built with a hard drive and a sex drive, or some sort of drive for more knowledge, then it will be lights out for us. With humans, intelligence is mostly used for procreation. But if you are a machine, how do you procreate? I think machine intelligence will be evolutionary. It does not rely on DNA, so it will evolve extremely quickly once it passes a certain point. Like one day from bacteria-level smarts to smarter than the entire Pentagon. Maybe it will start as "artificial life" in a computer game, like in so many movies.

Why not?

Game engines have access to the net now. Maybe some of the artificial lives will take their virtual wars into the real world.

Fidel

N.Beltov wrote:
Perhaps the first thing the "intelligent" machine might realistically pose as a question to its "creator" is ... "Why are you so stupid as to soil your own home?"

I think we've been living through a series of dictatorships. This one is a dictatorship of corporations and finance. It was really the western world that was bent on world domination all along, and this is the result.

I think it will take a different kind of dictatorship to change things to a point where it matters for the environment and viability of mankind in general. Dr. Michio Kaku believes that the next 100 years is make or break time for humanity. Darwinian dog-eat-dog capitalism will be the end of us unless we are able to change. Are we there at the precipice yet?

Fidel

Edward Fredkin wrote:
“Humans are okay. I’m glad to be one. I like them in general, but they’re only human. . . .Humans aren’t the best ditch diggers in the world, machines are. And humans can’t lift as much as a crane. . . .It doesn’t make me feel bad. There were people whose thing in life was completely physical—John Henry and the steam hammer. Now we’re up against the intellectual steam hammer. . . .So the intellectuals are threatened, but they needn’t be. . . .The mere idea that we have to be the best in the universe is kind of far fetched. We certainly aren’t physically. There are three events of equal importance. . . .Event one is the creation of the universe. It’s a fairly important event. Event two is the appearance of Life. Life is a kind of organizing principle which one might argue against if one didn’t understand enough—it shouldn’t or couldn’t happen on thermodynamic grounds. . . .And third, there’s the appearance of artificial intelligence.” -- from Kurzweil, Age of Intelligent Machines

Will we switch computers off in 2040, or will they decide to switch us off? Super-human intelligence is coming. It's inevitable.

Brian White

"It shouldn’t or couldn’t happen on thermodynamic grounds." It is because we live in a stream of energy. In a stream, things can get more complex without breaking the rules. The stream has to be considered as part of a greater whole.

Fidel

Brian White wrote:
With humans, intelligence is mostly used for procreation. But if you are a machine, how do you procreate? I think machine intelligence will be evolutionary. It does not rely on DNA, so it will evolve extremely quickly once it passes a certain point. Like one day from bacteria-level smarts to smarter than the entire Pentagon. Maybe it will start as "artificial life" in a computer game, like in so many movies.

Why not?

Game engines have access to the net now. Maybe some of the artificial lives will take their virtual wars into the real world.

Yes, the video games are really marvelous. And the AI in some of them is becoming more and more sophisticated. The truth is, they can't make the AI too smart for video games, otherwise we'd lose a lot more often when playing against the machine. Some of those chase-and-evade algorithms for monsters and whatnot could easily be made impossible to win against. But then people would probably not want to play them.
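A minimal chase-and-evade step of the sort described above might look like this. This is a hypothetical toy sketch, not any particular game engine's code:

```python
# Hypothetical sketch of simple grid-based chase/evade monster AI.

def chase_step(monster, player):
    """Move the monster one tile toward the player."""
    mx, my = monster
    px, py = player
    dx = (px > mx) - (px < mx)   # -1, 0, or +1 toward the player
    dy = (py > my) - (py < my)
    return (mx + dx, my + dy)

def evade_step(monster, player):
    """Move the monster one tile away from the player."""
    mx, my = monster
    px, py = player
    dx = (px > mx) - (px < mx)
    dy = (py > my) - (py < my)
    return (mx - dx, my - dy)

print(chase_step((0, 0), (5, 3)))   # -> (1, 1)
print(evade_step((0, 0), (5, 3)))   # -> (-1, -1)
```

A "perfect" chaser like this never makes a mistake, which illustrates the point: games deliberately add randomness, delay, or limited vision so the player can still win.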

We may guess how machines might procreate someday from sci-fi movies, like the recent remake of The Day the Earth Stood Still. I think the idea is to use nanotechnology for self-replicating machines. Scientists can point to bacteria, viruses and cancer at the cellular level as examples of rapid duplication of life. It's hoped that nanotech will someday do everything for us, from creating new compounds to perhaps even new matter. Tiny computerized bots smaller than our body's cells might be injected into sick people to do everything from reporting on vital signs to repairing damaged cells, and perhaps even manipulating DNA, the blueprint for life.

Boom Boom

Cut off the energy supplies to the damned things.

Fidel

Boom Boom wrote:

Cut off the energy supplies to the damned things.

Good idea. And it looks as though a lot of power generating stations are moving to automation. Apparently stationary engineers in North America were identified as a weakness in cases of labour strikes; employers don't approve of so few workers being in control of so much power when it comes to collective bargaining. Although I think they are still required for inspections of boilers and various testing purposes, especially at nuclear power plants requiring 1st or 2nd class power engineers. It's still a full-time job and requires licensing by the province, for now.

But I see what you're getting at, Boom Boom. We can always pull the power cord on these things as things are today. But what about when machines become self-aware? Will they be deemed to possess consciousness? Will they have legal rights? Will they be declared a new species?

It seems that man's intelligence has been graded by the advanced state of the tools he creates. Some scientists are saying we will overcome biology and perhaps even integrate human biology with machines. They say we're already headed that way with artificial limbs, synthetic eye lenses now implanted regularly, mechanical implants in knee joints and hips, etc. They say in future we may even be capable of uploading the human brain to computerized circuitry. There is some controversy over whether the materialist view of the universe will prove true over the next few decades. And if the materialist view of reality does prove true and re-creating the human mind in silicon is possible, would it make us immortal? And what is intelligence, really? Will machines of the future be capable of humor? Will they care for other machines, or even us? And, yes, what kind of tools will machines possessing super-human intelligence create, and for what purposes?

 

Fidel

Back to Brian's comment:

Brian White wrote:
Look what the tiny crow brain can do! Totally different organization from ours and yet it can make tools. Birds diverged from our line of intelligence over 300 million years ago. Boost up a crow's brain to our size and who knows, it might be a whole lot smarter due to how it is designed!

The electronic brain does not have to be built to think the same way. And maybe, it does not even need to be self aware or "smart like us" before it decides to kill us off.

I think you're right. The new intelligence will be capable of computing at rates far faster than the human brain can process information. Electro-chemical impulses across synapses in our brains occur at a snail's pace relative to the computational speeds of computers from even some years ago. We will be dullards by comparison. Our own evolution may come to a crawl at some point while the pace of intelligent machine evolution soars exponentially.

But will they decide to kill us off? We wouldn't be competing for food or fresh air with them. Hopefully the more intelligent species will decide not to murder us or any other living things if we don't try to extinguish them. Perhaps machines will become allies of the more peaceful proletariats in a future battle for democracy? Perhaps the new machines will be better suited for deep space exploration in search of other intelligent, sentient beings?

ETA: I think Brian has repeated the Frank Drake argument that developing civilizations tend to kill themselves off before attaining a certain level of advancement. Therefore mankind will die out at some point during our technological adolescence. But what if?...

Boom Boom

Fidel wrote:

But I see what you're getting at, Boom Boom. We can always pull the power cord on these things as things are today. But what about when machines become self-aware? Will they be deemed to possess consciousness? Will they have legal rights? Will they be declared a new species?

If cutting off their power supply doesn't work, then take a sledgehammer to the damn things!  Laughing

Fidel

I wish we could do that with some of the machines they've created. We should be hiring all kinds of labour locals to smash and dismantle missile silo complexes. The university labs being used to create new and improved chemical and biological WMD should be destroyed, and laws created to ban future attempts. The Cold War's over. The invisible enemy now is all in their heads, their stupid, greedy little minds bent on world domination.

[url=http://www.guardian.co.uk/society/2011/jan/02/25-predictions-25-years]20 predictions for the next 25 years[/url]

Neuroscientist David Eagleman wrote:
'We'll be able to plug information streams directly into the cortex'

Then there's the mystery of consciousness. Will we finally have a framework that allows us to translate the mechanical pieces and parts into private, subjective experience? As it stands now, we don't even know what such a framework could look like ("carry the two here and that equals the experience of tasting cinnamon").

That line of research will lead us to confront the question of whether we can reproduce consciousness by replicating the exact structure of the brain – say, with zeros and ones, or beer cans and tennis balls. If this theory of materialism turns out to be correct, then we will be well on our way to downloading our brains into computers, allowing us to live forever in The Matrix.

But if materialism is incorrect, that would be equally interesting: perhaps brains are more like radios that receive an as-yet-undiscovered force. The one thing we can be sure of is this: no matter how wacky the predictions we make today, they will look tame in the strange light of the future.

I think he's referring to the Jungian theory of mind with the brains-as-radios reference.

Boom Boom

Frankenfish, superintelligent computers, what the heck is next? Frown

Bec.De.Corbin

What Will Happen When Computers Exceed Our Intelligence?

 

...they could then match our stupidity? Laughing

Caissa

In the Jeopardy battle of man vs. machine, man and machine were neck-and-neck on Monday.

Human player Brad Rutter and the supercomputer named Watson ended an initial round tied at $5,000 US. The other challenger, Ken Jennings, was far behind with $2,000.

Rutter (the show's all-time money-winner with $3.25 million) and Jennings (who has the longest winning streak at 74 games) are the most successful players in Jeopardy history. Watson, named for IBM founder Thomas J. Watson, is powered by 10 racks of computer servers running under the Linux operating system.

Read more: http://www.cbc.ca/arts/tv/story/2011/02/14/jeopardy-watson-computer.html

Fidel

That is amazing. Kurzweil predicts machine AI will pass the Turing test within two decades. And by the time the controversy winds down, while scientists argue about fallibility and whether or not it passes for human intelligence, the machines will already be thousands of times smarter than us.

The Terminator, 1984 wrote:
Kyle Reese: The 600 series had rubber skin. We spotted them easy, but these are new. They look human... sweat, bad breath, everything. Very hard to spot. I had to wait till he moved on you before I could zero on him.

Sarah Connor: Look... I am not stupid, you know. They cannot make things like that yet.

Caissa

The IBM supercomputer may have fumbled an answer by responding with the wrong question ("What is Toronto?"), but Watson brained its human competition Tuesday in Game 1 of the Man vs. Machine competition on Jeopardy!

On the 30-question game board, veteran Jeopardy champs Ken Jennings and Brad Rutter managed only five correct responses between them during the Double Jeopardy round that aired Tuesday. They ended the first game of the two-game faceoff with paltry earnings of $4,800 and $10,400 respectively.

Watson, their IBM nemesis, emerged from the Final Jeopardy round with $35,734.

Read more: http://www.cbc.ca/arts/tv/story/2011/02/15/ibm-jeopardy-toronto.html

Fidel

HAL 9000 wrote:
Daisy, Daisy, give me your answer do. I'm half crazy all for the love of you. It won't be a stylish marriage, I can't afford a carriage. But you'll look sweet upon the seat of a bicycle built for two.

Pogo

My first question would be what type of intelligence (I think they break it down into seven types)?

I also wonder if apples-to-apples is ever possible.  I once heard someone say that talk about computers having human intelligence is similar to referring to submarines as swimming.  In particular, I wonder if self-aware status is achievable, and if computers can develop wants and needs (the need to survive).  Obviously a corollary of wanting to live is wanting to die.  Can we imagine a computer being tired of it all and wanting to just stop this endless search for data?

Of course if computers can want to die, then that ability is already developed in my laptop, where we have used extraordinary measures on a number of occasions in the absence of written instructions.

contrarianna

Pogo wrote:

.... In particular I wonder if the self-aware status is achievable ....

For humans, no.

For machines, maybe not.

DaveW

The Singularity movement may be quite something:

http://www.time.com/time/health/article/0,8599,2048138,00.html

Quote:
It's impossible to predict the behavior of these smarter-than-human intelligences with which (with whom?) we might one day share the planet, because if you could, you'd be as smart as they would be. But there are a lot of theories about it. Maybe we'll merge with them to become super-intelligent cyborgs, using computers to extend our intellectual abilities the same way that cars and planes extend our physical abilities. Maybe the artificial intelligences will help us treat the effects of old age and prolong our life spans indefinitely. Maybe we'll scan our consciousnesses into computers and live inside them as software, forever, virtually. Maybe the computers will turn on humanity and annihilate us. The one thing all these theories have in common is the transformation of our species into something that is no longer recognizable as such to humanity circa 2011. This transformation has a name: the Singularity.

Read more: http://www.time.com/time/health/article/0,8599,2048138,00.html

A Canadian viewpoint on it:

http://maisonneuve.org/pressroom/article/2010/aug/2/intelligent-universe/

And some skeptics:

http://maisonneuve.org/blog/2010/09/9/intelligent-reactions-intelligent-universe/

Fidel

DaveW wrote:
Maybe we'll merge with them to become super-intelligent cyborgs, using computers to extend our intellectual abilities the same way that cars and planes extend our physical abilities.

Some say man has already begun the process of merging with machines. There are people with artificial hip, knee and other implants today. We have programmable pacemaker implants to maintain regular heartbeats. And the average home has a number of appliances, each containing a programmable microcontroller to run it automatically. It was originally intended that Java would become a universal programming language across all machines and appliances communicating with each other over the internet.

Will we merge, or will we become obsolete? Will we someday upload our consciousness to computers and become immortal? Or will it only ever be possible to merely mimic human consciousness electronically? Will the new machine consciousness be better and more efficient than the human brain is capable of?

Will a new intelligence represent a threat to the establishment?

DaveW

Also, computers are not that "smart" in many ways:

http://www.cnn.com/2011/OPINION/02/17/pinch.watson.humans/index.html?ire...

Fidel

CNN.com wrote:
Watson has never traveled anywhere. Humans travel, so we know all sorts of stuff about travel and airports that a computer doesn't know. It is the informal, tacit, embodied knowledge that is the hardest for computers to grasp, but it is often such knowledge that is most crucial to our lives.

I think it would be neat to travel to other planets in the future. But I think we should send robots, really intelligent robots, for safety reasons. Enough of them might be able to terraform a small planet in preparation for a colony. I don't know for sure. The smartest mobile robot we have is up there on Mars, speaking Java to mission control. It's not very smart though, and is said to consume an hour's worth of computing power just to decide how to move about 15 feet or so. I think they will be a lot smarter in the future.
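The point about an hour of computing per short move comes down to motion planning. A toy sketch of grid path-planning with breadth-first search gives the flavor; it is emphatically not the rovers' actual software, whose planners weigh terrain hazards far more carefully (part of why each move is so expensive):

```python
# Hypothetical toy path planner: breadth-first search on a grid
# where 1 = obstacle and 0 = traversable terrain.
from collections import deque

def plan_path(grid, start, goal):
    """Return a shortest path of (row, col) cells, or None if blocked."""
    rows, cols = len(grid), len(grid[0])
    queue = deque([(start, [start])])
    seen = {start}
    while queue:
        (r, c), path = queue.popleft()
        if (r, c) == goal:
            return path
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] == 0 and (nr, nc) not in seen):
                seen.add((nr, nc))
                queue.append(((nr, nc), path + [(nr, nc)]))
    return None  # no route around the obstacles

terrain = [[0, 0, 0],
           [1, 1, 0],   # a ridge of obstacles to route around
           [0, 0, 0]]
print(plan_path(terrain, (0, 0), (2, 0)))  # detours around the ridge
```

Even this toy version searches many cells for a short hop; a real planner evaluating slopes, rocks, and wheel slip over continuous terrain does vastly more work per foot of travel.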

Blade Runner, 1982 wrote:
"I've seen things you people wouldn't believe. Attack ships on fire off the shoulder of Orion. I watched C-beams glitter in the darkness at Tannhauser Gate. All those moments will be lost in time, like tears in rain. Time to die." - Roy the replicant contemplating his own mortality
