Artificial Intelligence: It Will Kill Us by Jay Tuck (Full Transcript)

It takes all these different pieces of information and turns them, through fusion software, into an understandable picture – a picture that goes way, way beyond our vision.

Artificial intelligence only works if you have big data, but big data only works if you have artificial intelligence to make sense of it, because human beings can no longer sort and sift and order the huge volumes of data that we have collected.

And thus it is not surprising that the company with the most information in the world – probably the most powerful company in the world – Google, is very interested in artificial intelligence and has been traveling around the world like a shopping queen, buying up the companies that deal with robotics – this is one of their robots, called Atlas – and buying up artificial intelligence companies from all around the world.

Now, if you ask Google, it’s a peaceful robot, right? He doesn’t have a gun; he doesn’t throw atomic bombs. He just walks around and stands there.

But you may have seen the superimposed logo of DARPA – the Defense Advanced Research Projects Agency, the research arm of the Pentagon. And the video was made by Lockheed Martin, one of the most powerful, most influential, and richest weapons companies in the world.

So why is the Pentagon investing this money? Why has Lockheed Martin taken over large parts of the company? This guy is called "Big Dog." He also belongs to Google and is also DARPA-financed. A peaceful dog, right? Unless you catch him on maneuvers with the United States Marines, as part of a military unit.

So, these are not flower children. These are robots that have a function. And robots that have a function and an intelligence – perhaps an intelligence that goes beyond ours – are dangerous things.

Now, that's a Predator drone; this was taken at a secret United States Air Force base in New Mexico. Predator drones – you've seen them, right? On TV, in the newspapers. They're old! They're 20-year-old technology.

It looks very scary when Der Spiegel and ARD report on modern technology and the guys with joysticks who are killing people and Taliban far away in Afghanistan, but this is what a modern drone looks like. This is not a Predator; it's a Pegasus, an X-47B owned by the Navy. It's a jet-powered machine, not propeller-driven like the Predator. It flies 2,000 miles into enemy territory, it carries 2,000 kilos of explosives, and it's run by artificial intelligence.

It takes off alone, flies its mission alone, comes back alone, and – here's the clincher – it lands all by itself on an aircraft carrier. Ask any pilot you've ever met: what's the most difficult landing area you can possibly imagine? They'd say an aircraft carrier – short runway, the thing's moving – very hard.

This thinking drone can do it. But here are the two keys to Pegasus. First, Pegasus is invisible – and I'm not talking about stealth, about being invisible to radar. I'm talking about invisible to the human eye, and you won't find this in any newspaper anywhere. It's invisible to the human eye because the bottom has an LED layer and the top has cameras – they've been removed here in the picture – which film the sky and project a live image of the clouds above the aircraft onto the underside, so you can hardly see it.

These drones are responsible for a lot of the UFO sightings in Nevada, near the testing areas. Jet-engine propulsion, a range of 2,000 miles, takeoff and landing all by itself, optical stealth – you can't see it. And the second key: the kill decision, which United States law requires to be made by human beings – a human must be in the loop before someone is killed by a drone – is in the machine, and it doesn't need people.

It can decide by itself whether or not to kill somebody. The experts say it will make fewer mistakes and cause less collateral damage than human decision-makers. The kill decision in robots in the air, robots on the ground, and robots on or under the water – where there are also drones – is made, or can be made, by machines.

In my book, I quote many official United States government documents which say, “Our goal is to have the kill decision made by them.” The problem is, artificial intelligence sometimes makes mistakes.

This is the Talon, an automatic cannon. You can put a lot of ammunition in that thing, and you can also mount rockets on it. It has been in Iraq since 2007. At a demonstration in front of US generals and experts, the damn thing got out of control and started pointing at the audience. Thank goodness there was a Marine there who ran across the field, tackled it like a football player, threw it on its side, and probably prevented a couple hundred people from being killed.

That was not reason enough to take the lucrative contract away from the company that built it, and it wasn't enough to take the Talon out of Iraq. It's just sort of off duty for the moment because, you know, there were some "early stages of development" problems.
