Is artificial intelligence a threat?

Maybe you have watched one of those science fiction movies: I, Robot, The Terminator, and so on.

Is artificial intelligence a threat to human beings? Let’s consider neuroscientist and philosopher Sam Harris’s point of view: “It’s really a failure to detect a certain kind of danger. I’m going to describe a scenario in which the gains we make in artificial intelligence could ultimately destroy us or inspire us to destroy ourselves.”

The scenario

It’s as though we stand before two doors. Behind door number one, we stop making progress in building intelligent machines. Our computer hardware and software just stop getting better for some reason. Now take a moment to consider why this might happen. I mean, given how valuable intelligence and automation are, we will continue to improve our technology if we are at all able to. What could stop us from doing this? Something would have to destroy civilization as we know it to prevent us from making improvements in our technology permanently, generation after generation. Almost by definition, this would be the worst thing that has ever happened in human history. So the only alternative, and this is what lies behind door number two, is that we continue to improve our intelligent machines year after year after year. At a certain point, we will build machines that are smarter than we are, and once we have machines that are smarter than we are, they will begin to improve themselves. And then we risk what the mathematician I. J. Good called an “intelligence explosion”: the process could get away from us.

Now, this is often caricatured as a fear that armies of malicious robots will attack us. But that isn’t the most likely scenario. It’s not that our machines will become spontaneously malevolent. The concern is really that we will build machines that are so much more competent than we are that the slightest divergence between their goals and our own could destroy us. Just think about how we relate to ants. We don’t hate them. We don’t go out of our way to harm them. In fact, sometimes we take pains not to harm them. We step over them on the sidewalk. But whenever their presence seriously conflicts with one of our goals, let’s say when constructing a building, we annihilate them without a qualm. The concern is that we will
one day build machines that, whether they’re conscious or not, could treat us with similar disregard.

Intelligence is a matter of information processing in physical systems

Intelligence is a matter of information processing in physical systems. Actually, this is a little bit more than an assumption. We have already built narrow intelligence into our machines, and many of these machines perform at a level of superhuman intelligence already. And we know that mere matter can give rise to what is called “general intelligence,” an ability to think flexibly across multiple domains, because our brains have managed it. Right? I mean, there are just atoms in here, and as long as we continue to build systems of atoms that display more and more intelligent behavior, we will eventually, unless we are interrupted, build general intelligence into our machines.

It’s crucial to realize that the rate of progress doesn’t matter, because any progress
is enough to get us into the end zone. We don’t need Moore’s law to continue.
We don’t need exponential progress. We just need to keep going.

The second assumption is that we will keep going. We will continue to improve our intelligent machines. The train is already out of the station, and there’s no brake to pull. Finally, we don’t stand on a peak of intelligence, or anywhere near it, most likely.

And this really is the crucial insight. This is what makes our situation so precarious, and this is what makes our intuitions about risk so unreliable.

The intelligence explosion

If we build machines that are more intelligent than we are, they will very likely explore the spectrum of intelligence in ways that we can’t imagine, and exceed us in ways that we can’t imagine. And it’s important to recognize that this is true by virtue of speed alone. Right? So imagine we built a superintelligent AI that was no smarter than your average team of researchers at Stanford or MIT. Well, electronic circuits function about a million times faster than biochemical ones, so this machine should think about a million times faster than the minds that built it. So you set it running for a week, and it will perform 20,000 years of human-level intellectual work, week after week after week. How could we even understand, much less constrain, a mind making this sort of progress?
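The arithmetic behind that figure is easy to check. Here is a minimal sketch, taking the talk’s round number of a roughly million-fold speedup at face value:

```python
# Back-of-the-envelope check of the "20,000 years per week" claim,
# assuming the talk's round figure of a ~1,000,000x speedup of
# electronic circuits over biochemical ones.
speedup = 1_000_000        # subjective thinking-time per unit of wall-clock time
wall_clock_weeks = 1       # let the machine run for one week
weeks_per_year = 52

subjective_weeks = wall_clock_weeks * speedup          # 1,000,000 weeks of thought
subjective_years = subjective_weeks / weeks_per_year   # ~19,231 years

print(f"~{subjective_years:,.0f} years of human-level work per week")
```

That comes out to roughly 19,000 subjective years per wall-clock week, which the talk rounds up to 20,000.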

No safety concerns

The other thing that’s worrying, frankly, is this: imagine the best-case scenario. Imagine we hit upon a design of superintelligent AI that has no safety concerns. We have the perfect design the first time around. It’s as though we’ve been handed an oracle that behaves exactly as intended. Well, this machine would be the perfect labor-saving device. It can design the machine that can build the machine that can do any physical work, powered by sunlight, more or less for the cost of raw materials. So we’re talking about the end of human drudgery. We’re also talking about the end of most intellectual work. So what would apes like ourselves do in this circumstance? Well, we’d be free to play Frisbee and give each other massages. Add some LSD and some questionable wardrobe choices, and the whole world could be like Burning Man.

But what would happen under our current economic and political order? It seems likely that we would witness a level of wealth inequality and unemployment that we have never seen before. Absent a willingness to immediately put this new wealth to the service of all humanity, a few trillionaires could grace the covers of our business magazines while the rest of the world would be free to starve. And what would the Russians or the Chinese do if they heard that some company in Silicon Valley was about to deploy a superintelligent AI? This machine would be capable of waging war, whether terrestrial or cyber, with unprecedented power. This is a winner-take-all scenario. To be six months ahead of the competition here is to be 500,000 years ahead, at a minimum. So it seems that even mere rumors of this kind of breakthrough could cause our species to go berserk.
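The “500,000 years ahead” figure follows from the same assumed million-fold speedup: a six-month head start, experienced at machine speed, amounts to half a million subjective years.

```python
# Same assumed ~1,000,000x speedup, applied to a six-month head start.
speedup = 1_000_000
head_start_years = 0.5                        # six months of wall-clock time
print(f"{head_start_years * speedup:,.0f} subjective years")  # 500,000
```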

Do we have time?

One of the most frightening things at this moment is the kind of thing that AI researchers say when they want to be reassuring. And the most common reason we’re told not to worry is time. This is all a long way off, don’t you know. This is probably 50 or 100 years away. One researcher has said, “Worrying about AI safety is like worrying about overpopulation on Mars.” This is the Silicon Valley version of “don’t worry your pretty little head about it.” No one seems to notice that referencing the time horizon is a total non sequitur. If intelligence is just a matter of information processing, and we continue to improve our machines, we will produce some form of superintelligence. And we have no idea how long it will take us to create the conditions to do that safely. And if you haven’t noticed, 50 years is not what it used to be. Picture 50 years in months: just 600 of them.

For comparison, consider how long we’ve had the iPhone. Fifty years is not that much time to meet one of the greatest challenges our species will ever face. Once again, we seem to be failing to have an appropriate emotional response to what we have every reason to believe is coming. The computer scientist Stuart Russell has a nice analogy here. He said, imagine that we received a message from an alien civilization, which read: “People of Earth, we will arrive on your planet in 50 years. Get ready.”

Machine values?

Another reason we’re told not to worry is that these machines can’t help but share our values, because they will be literally extensions of ourselves. They’ll be grafted onto our brains, and we’ll essentially become their limbic systems. Now take a moment to consider that the safest and only prudent path forward, we are told, is to implant this technology directly into our brains. This may in fact be the safest and only prudent path forward, but usually one’s safety concerns about a technology have to be pretty much worked out before you stick it inside your head. The deeper problem is that building superintelligent AI on its own seems likely to be easier than building superintelligent AI and having the completed neuroscience that allows us to seamlessly integrate our minds with it. And given that the companies and governments doing this work are likely to perceive themselves as being in a race against all others, and given that to win this race is to win the world, provided you don’t destroy it in the next moment, it seems likely that whatever is easier to do will get done first.


I think we need something like a Manhattan Project on the topic of artificial intelligence. Not to build it, because I think we’ll inevitably do that, but to understand how to avoid an arms race and to build it in a way that is aligned with our interests.

Inspired by Sam Harris’s TED Talk.