
Nevada Opens Its Roads to Self-Driving Cars

Nevada, which has been a pioneer in allowing the testing of autonomous vehicles, became the first state in the country to issue a license that will allow Google (and other companies) to test self-driving cars on Nevada roads.

Even so, the average driver won’t be able to head out to their local autonomous car dealership and purchase one anytime soon. As one might expect, this first-of-its-kind license required Google to present extensive documentation showing the safety record of previous tests and how its drivers – two of whom must be in the car during testing – have been trained. Google and other companies looking to test self-driving vehicles in Nevada will also be required to purchase a pricey surety bond.

It seems especially appropriate that Nevada be the first state to take this step, as it hosted the finish line for the early DARPA Grand Challenges, where autonomous vehicle technology really took off. It’s hard to believe that just eight years ago driverless cars were unable to complete more than an eighth of a 150-mile course in the Mojave Desert, and today they are able to capably share the roads with human drivers.

(Autonomous vehicle photo courtesy of Nevada DMV)

What will the winning humanoid robot need to conquer the next DARPA Grand Challenge?

Could Boston Dynamics' PETMAN be a platform for the next Grand Challenge?

To date, the DARPA Grand Challenge has been a competition to build and field driverless vehicles, and those challenges have been a remarkable success. Now it looks like the agency is challenging teams to tackle something much more difficult: building an all-purpose robot that can perform a wide range of tasks that have, until now, been solely the domain of humans.

According to a post at the website Hizook, DARPA will require the winning robots to semi-autonomously complete the following:

1) The robot will maneuver to an open-frame utility vehicle, such as a John Deere Gator or a Polaris Ranger. The robot is to get into the driver’s seat and drive it to a specified location.

2) The robot is to get out of the vehicle, maneuver to a locked door, unlock it with a key, open the door, and go inside.

3) The robot will traverse a 100-meter, rubble-strewn hallway.

4) At the end of the hallway, the robot will climb a ladder.

5) The robot will locate a pipe that is leaking a yellow-colored gas (non-toxic, non-corrosive). The robot will then identify a valve that will seal the pipe and actuate that valve, sealing the pipe.

6) The robot will locate a broken pump and replace it.

So this tells me any successful entrant will require, at least:

  • A humanoid form. I know I mentioned it above, but apparently the rules do not specifically state a humanoid design is an absolute must. However, in order to operate a vehicle with a steering wheel or handlebars, designed to fit an average-sized human, as well as climb a ladder, this design seems to make the most sense, and is apparently what DARPA is looking for.
  • A high degree of manual dexterity. Although we’ve seen some novel solutions for gripping and manipulating objects, such as the universal jamming gripper, it seems like a hand with an opposable thumb would be the most obvious path here, especially if the robot needs to use this appendage for a diverse array of tasks (gripping a steering wheel, operating a lock, climbing a ladder, and so on) ill-suited for a more specialized appendage.
  • Advanced object recognition. Not only will the robot need to navigate around objects both while operating a vehicle and moving on its own, it will also need to identify a few very specific objects (a pipe leaking gas, a broken pump) and then identify ways to fix them. The required tasks do not state whether the robot must discern the difference between a broken pump and an intact pump on its own, but given the enormity of that assignment I think it might require some human assistance.
  • A portable, long-lasting power source. Currently, humanoid robots burn through power very quickly. (For reference, Honda’s ASIMO can run for one hour on a single charge, but ASIMO doesn’t move through rubble or climb ladders.) The challenge here will be balancing power with portability. It’s possible that teams could use a gasoline engine, like the kind that powers Boston Dynamics’ Big Dog.
Of course all of this is speculation, and I’m sure the brilliant minds that are sure to enter this challenge will be able to come up with some groundbreaking solutions to a lot of these problems. Given the enormous progress inspired by DARPA’s previous Grand Challenges, I can’t wait to see what teams design here.

(Via Danger Room)


Is mimicking human biology the path forward to developing human-like AI?

ECCEROBOT, billed by its designers at the University of Sussex as the “world’s first anthropomimetic robot,” was designed to mimic the form and function of the human body. Engineers created a synthetic skeleton to which they attached synthetic tendons and “muscles” with the goal of developing a robot that moves and interacts with the world as we do. Ultimately, researchers want to know if and how having a human-like body may help the machine develop human-like intelligence.

The concept is interesting, and I think this type of research, from an engineering standpoint, could prove very useful for the development of artificial limbs and bionic body parts, which users need to match “the originals” as closely as possible. As a generalized approach for building humanoid robots, however, I’m not convinced that trying to copy biology as closely as possible is the most efficient or effective way to go. After all, biology and evolution are messy, and even if we did want to copy biological structures exactly, we are far from having adequate technology and materials to do so.

Given these physical limitations I’m interested to see how the software that powers ECCEROBOT might “learn” and develop. An alternative approach with the same goal might be letting an AI interact with avatars and virtual objects in a detailed virtual world, but again, we’ve got a long way to go before that becomes a viable research path.

ECCEROBOT and other humanoid robots are going to be featured in a BBC documentary, “The Hunt for AI,” which will air tomorrow.

(Via Techland)

New chip models two neurons, one synapse

Researchers at MIT have developed a new computer chip that models a single human brain synapse, forming a structure through which two artificial “neurons” can exchange information. The inventors have big plans for this technology, noting that, if scaled up, it could form the foundation for “neural prosthetic devices.” They wouldn’t stop there, however:

The MIT researchers plan to use their chip to build systems to model specific neural functions, such as the visual processing system. Such systems could be much faster than digital computers. Even on high-capacity computer systems, it takes hours or days to simulate a simple brain circuit. With the analog chip system, the simulation is even faster than the biological system itself.

Another potential application is building chips that can interface with biological systems. This could be useful in enabling communication between neural prosthetic devices such as artificial retinas and the brain. Further down the road, these chips could also become building blocks for artificial intelligence devices, Poon says.

The article notes the human brain has an estimated 100 billion neurons, each with synapses to many other brain cells. Obviously it will take an enormous feat of engineering to scale this new chip to a point where it would begin to simulate any degree of biological intelligence (for reference, a fruit fly has about 100,000 neurons).
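For a rough sense of what “two neurons, one synapse” means computationally, here’s a minimal software sketch using a textbook leaky integrate-and-fire model. To be clear, this is just an illustration under my own assumptions, not the MIT team’s design – their chip emulates synaptic ion channels in analog circuitry rather than simple digital equations, and none of these constants come from their work.

```python
# Toy digital sketch of "two neurons, one synapse" using a leaky
# integrate-and-fire model. Illustration only: the MIT chip emulates
# synaptic ion channels in analog silicon, and none of these constants
# come from the MIT work.

DT = 0.001          # simulation time step: 1 ms
TAU = 0.020         # membrane time constant: 20 ms
V_REST = -0.070     # resting potential (volts)
V_THRESH = -0.054   # spike threshold
V_RESET = -0.070    # potential right after a spike
SYN_WEIGHT = 0.020  # voltage kick the synapse delivers to neuron B

def leak_step(v, drive):
    """Advance one neuron's membrane potential by one time step."""
    return v + (-(v - V_REST) + drive) / TAU * DT

def simulate(steps=1000, drive=0.020):
    """Neuron A gets constant input; neuron B hears A only through the synapse."""
    v_a, v_b = V_REST, V_REST
    spikes_a = spikes_b = 0
    for _ in range(steps):
        v_a = leak_step(v_a, drive)
        if v_a >= V_THRESH:        # neuron A fires...
            v_a = V_RESET
            spikes_a += 1
            v_b += SYN_WEIGHT      # ...and the spike crosses the synapse to B
        v_b = leak_step(v_b, 0.0)
        if v_b >= V_THRESH:
            v_b = V_RESET
            spikes_b += 1
    return spikes_a, spikes_b

print(simulate())  # roughly (31, 31): B spikes only because A drives it
```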

Even if modeling biological systems turns out to be a dead-end path to strong AI, this technology could at least teach us a great deal about the brain, and if it puts us on a path to neural prosthetics, that would be a tremendous breakthrough.

University of Miami to hold conference about robotics and the law

If my autonomous humanoid robot causes an injury to a guest in my home, am I liable? What if the injury could have been prevented by a firmware update that I willfully chose not to install? Or, what if an amputee, upon receiving a stronger-than-human bionic arm, gets in a fistfight and inadvertently kills his opponent with said robotic arm? Legal minds could explore these situations and many more at the University of Miami Law School’s “We Robot” conference, which will be held in Coral Gables, Florida, on April 21 and 22, 2012.

We seek reports from the front lines of robot design and development, and invite contributions for works-in-progress sessions. In so doing, we hope to encourage conversations between the people designing, building, and deploying robots, and the people who design or influence the legal and social structures in which robots will operate.

Robotics seems increasingly likely to become a transformative technology. This conference will build on existing scholarship exploring the role of robotics to examine how the increasing sophistication of robots and their widespread deployment everywhere from the home, to hospitals, to public spaces, and even to the battlefield disrupts existing legal regimes or requires rethinking of various policy issues.

People will undoubtedly ask these questions on an increasing basis as robots become a more common part of our lives. We’re on the verge of a day when humans will be living and working with robots designed to operate in human environments, such as homes and offices, as opposed to environments designed for them, like a factory’s manufacturing floor. We’ll also be incorporating robotics into our bodies in ways ranging from bionic limbs and artificial organs to nanobots. When that day comes, and when something inevitably goes wrong, the law will need to address it. This conference is a step in telling us all how it will do so.

(Via Boing Boing)

The Singularity may be closer than Paul Allen thinks

Microsoft co-founder Paul Allen caused quite a stir among transhumanists and singularitarians this week when he penned an article titled, “The Singularity Isn’t Near.” In it, he and co-author Mark Greaves argue that while a Kurzweil-esque technological singularity “might one day occur,” it is a long way off – certainly further out than Kurzweil’s predicted date of 2045.

The authors’ argument rests on the fact that humans have barely begun to understand exactly how our own brains work, and therefore could not possibly create a human-equivalent (or smarter-than-human) AI without massive, revolutionary advances in neuroscience and/or AI research occurring in the near future – advances of which Allen and Greaves are skeptical.

But I think this is the fatal flaw in Allen and Greaves’ argument: those who believe the Singularity will occur in the next 20–50 years (including thinkers like Ray Kurzweil and Vernor Vinge, who coined the term) do not argue that a smarter-than-human AI needs to be modeled after the human brain or employ human-like cognition. Indeed, there is a far better chance this AI will be totally alien – at this point, however, we simply don’t know. As the Singularity Institute notes, the reason we don’t know is because “we’re not that smart.” In other words, our inherent cognitive limitations make it difficult for humans to imagine how a vastly smarter alien intelligence will behave or operate.

That said, technology marches on, and advancements in computing speed and power continue to escalate at an exponential rate. Allen and Greaves even note that we’re on the verge of developing Exaflop-class computers that “could probably deploy the raw computational capability needed to simulate the firing patterns of all of a brain’s neurons, though currently it happens many times more slowly than would happen in an actual brain.”
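To see why exaflop-class machines keep coming up in this context, here’s the back-of-envelope arithmetic. The neuron and synapse counts are commonly cited ballpark figures, and the per-synapse operation cost is purely my own assumption for illustration, not a number from Allen and Greaves:

```python
# Back-of-envelope estimate of the compute needed to simulate every neuron
# in a human brain firing in real time. Constants are rough, commonly cited
# ballpark figures; flops-per-event is an assumption for illustration.

neurons = 100e9                 # ~100 billion neurons
synapses_per_neuron = 1e4       # ~10,000 synapses per neuron
avg_firing_rate_hz = 10         # ~10 spikes per neuron per second
flops_per_synaptic_event = 10   # assumed cost to update one synapse per spike

events_per_second = neurons * synapses_per_neuron * avg_firing_rate_hz
flops_needed = events_per_second * flops_per_synaptic_event

print(f"Synaptic events per second: {events_per_second:.0e}")  # ~1e16
print(f"FLOPS to keep up in real time: {flops_needed:.0e}")    # ~1e17
# An exaflop machine is 1e18 FLOPS, i.e. within an order of magnitude
# of this (admittedly crude) estimate.
```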

Given this dramatic increase in raw computational power and the advancements that will likely continue in the decades to come, is it so unreasonable to think humans will see the birth of human-equivalent, if not smarter-than-human, AI? As plenty of thinkers have shown, Kurzweil among them, it is not.

New iPhone’s killer app – voice-controlled personal assistant

Casual observers of this morning’s new iPhone 4S announcement might be disappointed in the device’s hardware upgrades. Essentially, Apple took the iPhone 4, dropped in a new chip (the same silicon that powers the iPad 2), a better camera, and some new internals, and called it a day. However, Apple also announced new functionality that could change the way many of us interact with our phones – “intelligent assistant” software called Siri.

On stage, Apple CEO Tim Cook demonstrated Siri’s abilities, which make it possible to perform a number of tasks through vocal commands and questions, including looking up information, scheduling meetings, obtaining restaurant recommendations (and making reservations) or sending text messages.

However, voice control has been around for years and has never really caught on. So why should Siri be different? The apparent beauty of Siri is that the software is intelligent enough to understand what a user is asking even if the question isn’t completely direct. In other words, it allows humans to speak naturally rather than tailor their speech patterns to the machine.

For instance, many types of software might be able to provide an answer to “What will the weather be like today?” The thing is, humans don’t always ask questions like that. We might ask, “Should I wear a raincoat today?” Siri can determine you’re actually asking about the weather, and provide the appropriate response.
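To make the raincoat example concrete, here’s a deliberately naive sketch of intent matching using a hand-written keyword map. Apple hasn’t published how Siri actually parses requests, so treat this as an illustration of the general idea, not Siri’s method:

```python
# A deliberately naive intent matcher: map loosely phrased questions to an
# underlying intent via keyword cues. Real assistants use far more
# sophisticated language understanding; this only illustrates how two very
# different phrasings can resolve to the same underlying query.

INTENT_KEYWORDS = {
    "weather": {"weather", "rain", "raincoat", "umbrella", "forecast", "sunny"},
    "dining": {"restaurant", "dinner", "reservation", "table"},
}

def classify_intent(utterance: str) -> str:
    """Return the intent whose keyword set best overlaps the utterance."""
    words = set(utterance.lower().replace("?", "").split())
    best_intent, best_overlap = "unknown", 0
    for intent, keywords in INTENT_KEYWORDS.items():
        overlap = len(words & keywords)
        if overlap > best_overlap:
            best_intent, best_overlap = intent, overlap
    return best_intent

print(classify_intent("What will the weather be like today?"))  # weather
print(classify_intent("Should I wear a raincoat today?"))       # weather
print(classify_intent("Can you book me a table for dinner?"))   # dining
```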

By itself, Siri is a neat trick, but it becomes useful when paired with Internet resources like Wolfram Alpha, Yelp, Google Maps, OpenTable, and so on. If it works as advertised, Siri may be the first time a gadget has delivered on the promise of legitimately useful voice control.

Chatbot passes Turing test, my results are underwhelming

Chatbots are programs designed to simulate conversation with an actual human, usually via text chat. Most of these programs wouldn’t fool even the most gullible people among us into thinking they possess any kind of intelligence, but a souped-up version of Cleverbot recently passed a Turing test at a tech conference in India.

The Cleverbot test took place at the Techniche festival in Guwahati, India. Thirty volunteers conducted a typed 4-minute conversation with an unknown entity. Half of the volunteers spoke to humans while the rest chatted with Cleverbot. All the conversations were displayed on large screens for an audience to see.

Both the participants and the audience then rated the humanness of all the responses, with Cleverbot voted 59.3 per cent human, while the humans themselves were rated just 63.3 per cent human. A total of 1334 votes were cast – many more than in any previous Turing test, says Cleverbot’s developer and AI specialist Rollo Carpenter.

Cleverbot is a bit different from most chatbots – instead of choosing from a set of canned responses, the program “learns” from the responses it receives in other conversations and integrates them into its repertoire. It then uses an algorithm to select an “appropriate” response. This can pay off with realistic responses or can produce truly bizarre answers, as seen below:

Keep trying, Cleverbot!
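For a sense of how the retrieval approach described above might work in miniature, here’s a minimal sketch: the bot stores what humans said in reply to earlier prompts and answers a new prompt with the reply attached to the most similar stored one. Cleverbot’s actual algorithm is proprietary and vastly more elaborate; none of this code comes from it.

```python
# Minimal retrieval-based chatbot in the spirit described above: it "learns"
# by storing what a human said in reply to earlier prompts, then answers a
# new prompt with the reply attached to the most similar stored prompt.

from difflib import SequenceMatcher

class RetrievalBot:
    def __init__(self):
        self.memory = []  # (prompt, human_reply) pairs observed so far

    def learn(self, prompt: str, human_reply: str) -> None:
        self.memory.append((prompt.lower(), human_reply))

    def respond(self, prompt: str) -> str:
        if not self.memory:
            return "Tell me more."
        # Reuse the reply paired with the most similar stored prompt.
        best = max(
            self.memory,
            key=lambda pair: SequenceMatcher(None, prompt.lower(), pair[0]).ratio(),
        )
        return best[1]

bot = RetrievalBot()
bot.learn("How are you today?", "I'm doing well, thanks for asking.")
bot.learn("Do you like music?", "I love jazz.")
print(bot.respond("How are you doing today?"))  # reuses the first reply
```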

Cautiously optimistic about IBM’s “cognitive computing” chips

This week, IBM announced new computer chips it’s referring to as “cognitive computing.” According to press materials, the chips are inspired by neurobiology, and work like a biological brain, with neurons and synapses:

While they contain no biological elements, IBM’s first cognitive computing prototype chips use digital silicon circuits inspired by neurobiology to make up what is referred to as a “neurosynaptic core” with integrated memory (replicated synapses), computation (replicated neurons) and communication (replicated axons).

IBM has two working prototype designs. Both cores were fabricated in 45 nm SOI-CMOS and contain 256 neurons. One core contains 262,144 programmable synapses and the other contains 65,536 learning synapses. The IBM team has successfully demonstrated simple applications like navigation, machine vision, pattern recognition, associative memory and classification.

IBM’s overarching cognitive computing architecture is an on-chip network of light-weight cores, creating a single integrated system of hardware and software. This architecture represents a critical shift away from traditional von Neumann computing to a potentially more power-efficient architecture that has no set programming, integrates memory with processor, and mimics the brain’s event-driven, distributed and parallel processing.

IBM’s long-term goal is to build a chip system with ten billion neurons and hundred trillion synapses, while consuming merely one kilowatt of power and occupying less than two liters of volume.

Now I’m no big-city neuroscientist, but as it stands, with 256 “neurons” these chips possess less cognitive machinery than a nematode worm and orders of magnitude less than a fruit fly. Even a chip system with 10 billion neurons would have roughly one-tenth the neurons of a human brain (give or take), although with 100 trillion synapses that system would be massively parallel, enabling it to process an incredible amount of data.
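Here’s the quick arithmetic behind those comparisons, using the figures from IBM’s announcement above plus commonly cited neuron counts (the biological numbers are rough estimates):

```python
# Scale comparison using the figures quoted from IBM above, plus commonly
# cited neuron counts (nematode ~302, fruit fly ~100,000, human ~100 billion).

chip_neurons = 256
programmable_synapses = 262_144
learning_synapses = 65_536

print(programmable_synapses // chip_neurons)  # 1024 synapses per neuron
print(learning_synapses // chip_neurons)      # 256, i.e. a full 256x256 crossbar

fruit_fly_neurons = 1e5
goal_neurons = 10e9       # IBM's stated long-term goal
human_neurons = 100e9     # commonly cited estimate

print(f"Fruit fly vs. chip: {fruit_fly_neurons / chip_neurons:,.0f}x more neurons")
print(f"Human vs. IBM goal: {human_neurons / goal_neurons:,.0f}x more neurons")
```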

Still – and correct me if I’m wrong here – once you get to 10 billion simulated neurons, the logical next step is 100 billion, which meets or exceeds the human brain’s count, and then a trillion, and so on. At that point you’ve got what amounts to a ridiculously powerful superintelligence.

Given IBM’s pioneering work in supercomputing and AI, I’m willing to give them the benefit of the doubt here. A radical new chip architecture is big news by itself, and if it works as promised, by learning through experiences rather than being programmed, we’ll have something new and exciting on our hands, and a potential path forward for true AI. Call it cautious optimism.

Radio station appoints AI “virtual assistant” as DJ

KROV-FM in San Antonio is preparing to unveil a new DJ who will play music, give weather and traffic updates, and provide banter between songs. The only catch is that this particular DJ is a “virtual assistant” computer program that can be purchased for a mere $200.

The move makes sense for radio, which has long faced shrinking audiences and therefore shrinking revenue, leading to tighter margins and smaller staffs. Even though this particular virtual assistant requires someone to write its scripts, that’s apparently a lot cheaper than hiring on-air talent:

This operator work, according to Garcia, should be much cheaper labor than hiring a full-time human DJ and thus ultimately save radio stations millions of dollars.

“If you have a staff of five that is paid $100,000 a year each, that’s half-a-million dollars,” he said. “The entire (AI) program is $200, a one-time fee. You never have to pay an annual fee. It never has to go to the bathroom. It never goes on an egomaniac spree. It is always there.”

A part-time laborer could be hired as Denise’s human assistant, Garcia reckons, for about $10 an hour.
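Garcia’s math works out roughly like this – the staffing figures come from his quote, while the annual operator hours are my own illustrative assumption:

```python
# Rough first-year cost comparison using the figures from Garcia's quote.
# The annual operator hours are an illustrative assumption, not a number
# from the article.

staff_count = 5
salary_per_year = 100_000
human_staff_cost = staff_count * salary_per_year     # $500,000 per year

software_one_time_fee = 200
operator_rate_per_hour = 10
operator_hours_per_year = 2_000                      # assumed script-writing load

ai_dj_first_year = software_one_time_fee + operator_rate_per_hour * operator_hours_per_year

print(f"Human staff, per year: ${human_staff_cost:,}")   # $500,000
print(f"AI DJ, first year:     ${ai_dj_first_year:,}")   # $20,200
```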

You can listen to an example of the program’s capabilities over at its vendor website. It’s nowhere close to sounding authentically human, but if all it needs to do is mention names of songs and musicians and give the occasional news update, I’m sure people will deal with it. And the fact that it’s an off-the-shelf piece of software that’s been adapted to fill the role of an on-air personality is kind of neat, too, in a hacker sort of way.