Channel: Artificial Intelligence

ERIC SCHMIDT: Here's What Humans Can Do That Computers Will Never Be Able To Do


Google chairman Eric Schmidt said in conversation with Glenn Beck that "there's something about humans that technologists always forget. Humans are creative and unpredictable."

Schmidt's point is that while humans can create genuinely new things that have never existed before, computers only ever follow a set of instructions. This echoes the sentiment of scientist and professor Douglas Hofstadter, who recently said that artificial intelligence has much further to go than most will admit.

Here's the full clip from The Blaze:

Join the conversation about this story »


Social Networks Like Facebook Are Finally Going After The Massive Amount Of 'Unstructured Data' They're Collecting



The future of social media turns on being able to make sense of "unstructured data," the firehose of texts, posts, tweets, pictures, and videos that even the most powerful computers are unable to classify. 

Why is this important? Because social networks have mined only the tip of the iceberg of their data: structured information such as likes, dislikes, occupation, location, and age.

That leaves a lot of other social activity that hasn't been parsed yet. More than 90% of social data is unstructured. 

In a new report from BI Intelligence, we show how social networks are in a race to innovate in areas like "deep learning," cutting-edge artificial intelligence research that attempts to program machines to perform high-level thought and abstractions. These advances are helping social networks and their advertisers glean insights from this vast ocean of unstructured consumer data. Thanks to deep learning, social media has the potential to become far more personalized. New marketing fields are quickly emerging, too: audience clustering, predictive marketing, and sophisticated brand sentiment analysis.

Access The Full Report By Signing Up For A Free Trial Today >>

Here are some of the key takeaways from the report: 

In full, the report:



Social Networks Are Investing Big In Artificial Intelligence



Consumer Internet companies are in a race to build out their artificial intelligence talent and acquire the most advanced machine-learning systems. 

They're doing so in order to finally make sense of the massive amount of data they're collecting — from how people arrive at purchase decisions, to the meaning of the text in every user's posts.

In a recent report from BI Intelligence, we show how advances in "deep learning," cutting-edge artificial intelligence research that attempts to program machines to perform high-level thought and abstractions, are helping social networks and their advertisers glean insights from this vast ocean of unstructured consumer data. Thanks to deep learning, social media has the potential to become far more personalized. New marketing fields are quickly emerging, too: audience clustering, predictive marketing, and sophisticated brand sentiment analysis.

Access The Full Report By Signing Up For A Free Trial Today >>

Here are some of the major acquisitions and hires from the AI field that occurred in recent months:

In full, the report:



'DeepFace': Facebook's Face Recognition Gets One Step Closer To Human-Like Precision


Soon enough, Facebook may be able to recognize the faces of your friends, family and co-workers just as precisely as you can.

The social media giant is further developing its facial verification technology to make it nearly as accurate as the human eye, according to a new blog post from the company and a report from MIT Technology Review.

When asked whether two photos of unfamiliar people show the same person, the average human answers correctly 97.53 percent of the time, MIT says. Facebook's technology can now match faces correctly 97.25 percent of the time, which the company says represents a roughly 25 percent reduction in errors compared with its earlier software.
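The figures above are easier to compare as error rates. The short calculation below uses only the percentages quoted in the article; the "previous" error rate is our own inference from the 25 percent claim, read as a reduction in errors, and is an assumption rather than a reported number:

```python
# Back-of-envelope comparison of the accuracy figures quoted above.
human_acc = 97.53      # humans matching unfamiliar face pairs (%)
deepface_acc = 97.25   # DeepFace on the same benchmark (%)

human_err = 100 - human_acc        # about 2.47% error
deepface_err = 100 - deepface_acc  # about 2.75% error

# If DeepFace cut errors by 25%, the earlier system's error rate was:
previous_err = deepface_err / (1 - 0.25)

print(f"human error:    {human_err:.2f}%")
print(f"DeepFace error: {deepface_err:.2f}%")
print(f"implied previous error: {previous_err:.2f}%")  # about 3.67%
```

Seen this way, the gap between DeepFace and human performance is only about a quarter of a percentage point of error.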

Facebook's new software, known as DeepFace, performs facial verification: determining whether or not two images show the same face. This goes one step beyond mere facial recognition, which puts a name to a face, although Facebook's Yaniv Taigman, who works on the company's AI team, tells MIT that DeepFace may improve facial recognition as well.

DeepFace accomplishes this in two steps. First, it corrects for the angle of the subject's face so that the person appears to face forward in the image, using a 3D model of an "average" forward-looking face to nail down the adjustment. Then the software applies a method known as deep learning: a simulated neural network computes a numerical description of the reoriented face. If the software finds similar enough facial descriptions for two different images, it concludes that they must show the same face.
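The final matching step can be sketched in a few lines of code. Everything below is illustrative: the 128-number descriptors, the cosine-similarity measure, and the 0.8 threshold are stand-ins of our own choosing, not DeepFace's actual representation or parameters (the real descriptors come from a trained deep neural network):

```python
import math
import random

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length descriptor vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def same_face(desc1, desc2, threshold=0.8):
    """Declare a match when descriptors are similar enough.
    The threshold is an illustrative placeholder."""
    return cosine_similarity(desc1, desc2) >= threshold

# Toy descriptors standing in for the network's output on three photos.
random.seed(0)
face_a = [random.gauss(0, 1) for _ in range(128)]
face_b = [x + random.gauss(0, 0.1) for x in face_a]   # same person, new photo
stranger = [random.gauss(0, 1) for _ in range(128)]   # unrelated person

print(same_face(face_a, face_b))    # same person: high similarity
print(same_face(face_a, stranger))  # different people: low similarity
```

The design choice is the important part: once faces are reduced to fixed-length vectors, "is this the same person?" becomes a cheap geometric comparison rather than an image-processing problem.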

Facial verification isn't new to Facebook. In fact, the social network began suggesting friends to tag in photos back in 2010. The DeepFace software, however, will likely prevent the site from mistakenly tagging you in photos as a friend who happens to have similar facial features.

DeepFace is just a research project for now, according to MIT, but researchers will be presenting the technology at the IEEE Conference on Computer Vision and Pattern Recognition in June. 


SEE ALSO: Advertisers 'Are Now Openly Talking About Their Discontent' With Facebook


Reinventing Social Media: Deep Learning, Predictive Marketing, And Image Recognition Will Change Everything



The world's largest social networks are storing massive amounts of never-before-analyzed data that could reveal crucial information about consumers — from how people arrive at purchase decisions, to what services or goods they may need in the near future.

Potentially, this data could also make social networks like Facebook and Pinterest do a better job of showing users what they want to see, rather than content and ads they'd rather not waste time on. However, as much as 90% of the data is "unstructured," meaning it's spontaneously generated and not easily captured and classified. 

In a new report from BI Intelligence, we show how advances in "deep learning," cutting-edge artificial intelligence research that attempts to program machines to perform high-level thought and abstractions, are helping social networks and their advertisers glean insights from this vast ocean of unstructured consumer data. Thanks to deep learning, social media has the potential to become far more personalized. New marketing fields are quickly emerging, too: audience clustering, predictive marketing, and sophisticated brand sentiment analysis.

Access The Full Report By Signing Up For A Free Trial Today >>

Here are some of the key takeaways from the report: 

In full, the report:



Elon Musk And Mark Zuckerberg Have Invested $40 Million In A Mysterious Artificial Intelligence Company



Facebook's Mark Zuckerberg and Tesla's Elon Musk are investing $40 million into an artificial intelligence company called Vicarious alongside actor and tech investor Ashton Kutcher, reports the Wall Street Journal.

Vicarious is a company aiming to replicate the human neocortex as computer code. The neocortex is the rather important part of your brain that "sees, controls the body, understands language, and does math." Company founder Scott Phoenix told the WSJ that once they successfully accomplish this task, "you have a computer that thinks like a person except it doesn't need to eat or sleep."

After that, Phoenix said, the next task is to create "a computer that can understand not just shapes and objects, but the textures associated with them. For example, a computer might understand 'chair.' It might also comprehend 'ice.' Vicarious wants to create a computer that will understand a request like 'show me a chair made of ice.'"

If this sounds like a tall order, it's only because it is. The applications described above may still be decades out from happening, but there are more immediately useful ways of using this technology:

Facebook, for instance, wants to turn the massive amounts of information shared by its users into a database of wisdom. Ask Facebook a question, and, if all goes to plan, it will spit out an answer based on facts users have shared. Facebook is also using artificial intelligence for facial recognition to identify users in photos. Facebook recently hired a leader in artificial intelligence, Yann LeCun, to run a new lab.

Phoenix acknowledges that Vicarious has an uphill battle ahead. He told the WSJ that it "won't make a profit anytime soon," and the company has disclosed few details about how its technology will work. It won't even reveal its complete address, in case someone should try to hack it.

This is not the company's first major injection of capital. In 2010, Peter Thiel invested $1.2 million, and a second group of investors (including Facebook co-founder Dustin Moskovitz) gave the company $15 million in 2012. 

Get more information on WSJ >>


Artificial Intelligence Is The Next Frontier For Social Networks


Social networks are capturing a phenomenal amount of data on their users. But the vast majority of that information is unstructured and can't easily be put to use.

Now, though, Facebook, Twitter, LinkedIn, and others are beginning to use artificial intelligence techniques to build out their "deep learning" capacities. They're starting to process all the activity occurring over their networks, from conversations, to photo facial recognition, to gaming activity.  

In a recent report from BI Intelligence, we show how advances in cutting-edge artificial intelligence research, which program machines to perform high-level thought and abstractions, are helping social networks and their advertisers glean insights from this vast ocean of unstructured consumer data. Thanks to deep learning, social media has the potential to become far more personalized. New marketing fields are quickly emerging, too: audience clustering, predictive marketing, and sophisticated brand sentiment analysis.

Access The Full Report By Signing Up For A Free Trial Today >>

Here are some of the major acquisitions and hires from the AI field that occurred in recent months:

In full, the report:

For full access to all BI Intelligence's charts and data on social media, mobile, video, e-commerce, and payments, sign up for a free trial subscription.


Here’s What It Would Take To Upload Your Brain Into A Computer


When Wally Pfister’s Transcendence is released on April 17, millions of moviegoers will be asking themselves, “Could we really upload Johnny Depp into a computer one day?”

In the spirit of "Could Bruce Willis Save the World?" and "Gravity Fact Check," we’d like to take Hollywood perhaps a bit too seriously and examine the scientific plausibility of what’s called “whole brain emulation.”

Whole brain emulation as depicted in Transcendence appears possible in principle. As a character from the movie's first trailer says, the mind is “a pattern of electrical signals,” nothing more. That might be controversial among the general public, but it is the near-universal consensus among cognitive scientists. And in principle, that pattern of electrical signals could be run (“emulated”) on a computer rather than in that lump of meat inside a human skull.

We've already successfully emulated (much) simpler things. If you’re old enough, you might have played games like Space Invaders on the Atari 2600 gaming console like one of us (Luke) did. The Atari 2600 had a processor called the MOS 6502 that’s long since obsolete. But we can emulate it exactly inside a modern-day computer. (Researchers do this kind of thing for preservation purposes: Physical chips decay and warp over time, but an emulation is just information, and can be copied and preserved indefinitely.)

Here’s what we mean by “emulate”: When young Luke pushed the “shoot” button on the Atari joystick, an electrical signal traveled from the joystick to the MOS 6502 processor, which in turn controlled—according to strict rules—how other electrical signals moved from circuit to circuit inside the processor. All this activity eventually sent a certain pattern of electrical signals to his TV, where Luke could see that his spaceship had just fired a laser blast at the invading aliens.

By scanning the processor’s map of circuits in high resolution, and by knowing the rules of how those circuits work, we can reproduce the same functionality in a modern computer, without the physical MOS 6502 processor. In the emulated processor, there’s no electrical current, just numbers; no physical circuitry, just rules for how the numbers change. But the result is that pressing a button connected to the MOS 6502 emulation produces the exact same pictures on the screen as pushing the “shoot” button on the old Atari joystick did. The physical MOS 6502 is no longer there, but the information pattern is the same, so identical inputs (button presses) produce identical outputs (images on the screen).
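The "no circuitry, just rules for how the numbers change" idea is simple to demonstrate. The toy machine below is our own invention and is vastly simpler than a real MOS 6502, but it captures the principle: processor state is just a number, instructions are just update rules, and identical inputs therefore always produce identical outputs:

```python
# A toy "processor" emulated as pure data plus update rules.
# An illustrative machine of our own design, not the MOS 6502:
# one register, three instructions.

def run(program, inputs):
    """Execute (opcode, operand) pairs against a stream of 'button
    presses', returning whatever was drawn to the 'screen'."""
    register = 0
    output = []
    inputs = iter(inputs)
    for opcode, operand in program:
        if opcode == "ADD":        # register += operand
            register += operand
        elif opcode == "READ":     # consume one input value
            register += next(inputs)
        elif opcode == "DRAW":     # emit current state to the 'screen'
            output.append(register)
    return output

program = [("ADD", 5), ("READ", None), ("DRAW", None)]

# Identical inputs yield identical outputs -- the heart of emulation.
print(run(program, [1]))  # [6]
print(run(program, [1]))  # [6] again, deterministically
print(run(program, [9]))  # [14]
```

A real emulator does exactly this at far greater scale: hundreds of opcodes, registers, and memory, but still nothing more than numbers updated by fixed rules.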

If you want to try this for yourself, head over to NESbox, where you can play thousands of old games like Super Mario World right there in your browser. These games weren’t rewritten to work in your browser: Instead, the original hardware (like 1991’s Super Nintendo) is emulated exactly in your browser, and thus the same inputs (the game file plus your button presses) result in the same outputs (moving images and sound).

In theory, it should be possible to do a similar thing with a human brain. We could simulate the brain’s physical, chemical, and electrical structure in such detail that we could run the brain on a computer, producing similar “outputs” (instructions to virtual limbs, mouth, and other organs) as a real brain would. But in practice, we’re lacking three major things. First, we can’t yet scan a human brain in nearly enough detail to map all its “circuits.” Second, we don’t understand the rules governing how neurons and other brain cells work nearly as well as we understand how computer circuits work. And third, we don’t have enough computing power to run the emulation, since the human brain is vastly more complicated than the MOS 6502 processor.

We’re still pretty far away from meeting any of these conditions. Today scientists can’t even emulate the brain of a tiny worm called C. elegans, which has 302 neurons, compared with the human brain’s 86 billion neurons. Using models of expected technological progress on the three key problems, we’d estimate that we wouldn’t be able to emulate human brains until at least 2070 (though this estimate is very uncertain).

But would an emulation of your brain be you, and would it be conscious? Such questions quickly get us into thorny philosophical territory, so we’ll sidestep them for now. For many purposes—estimating the economic impact of brain emulations, for instance—it suffices to know that the brain emulations would have humanlike functionality, regardless of whether the brain emulation would also be conscious.

So, how scientifically realistic is Transcendence? From the trailers and the original screenplay, it seems likely to be wrong about many of the details, but it might be right about the eventual technological feasibility of whole brain emulation. But we must remember this isn’t “settled science”—it’s more akin to the “exploratory engineering” of pre-Sputnik astronautics, pre-ENIAC computer science, and contemporary research in molecular nanotechnology, interstellar travel, and quantum computing. Science fiction often becomes science fact, but sometimes it does not. The feasibility of whole brain emulation is an open question, and will likely remain so for decades.

As for Johnny Depp, he was born in 1963, and by 2070 he’d be 107 years old. In the United States, life expectancy for males is just below 79 years. By our estimate, Jack Sparrow is unlikely to be digitally preserved so that he can star in Pirates of the Caribbean 16: Digital Sparrow. Whether that is a tragedy or a relief is a question beyond the scope of this piece.

SEE ALSO: Johnny Depp Was Trampled By A Horse On 'The Lone Ranger' Set



The Evolution Of Artificial Intelligence In Movies Since The 1920s


Transcendence casts Johnny Depp as a brilliant scientist who plots out grand plans for The Singularity, only to become that omnipotent, sentient technology himself when an assassination attempt goes awry.

While the new film is a look at what happens when technology becomes humanoid, it’s certainly not the first movie to ever do so. In fact, cinema has been toying with the idea of The Singularity — the point at which A.I. acquires beyond-genius-level intelligence — since the 1920s, even if it was never called that back then.

The Singularity has been showing up in films for decades, ranging from talking, all-knowing computers that refuse to do what we say to robots that serve alongside humans without explicit direction or order. As such, there are some amazing examples of Retro Singularity: a primitive, Tomorrowland-esque version of the future that writers of the past may not have even known they were predicting.

Think all the way back to Metropolis, the 1927 film that brought us Maria, a robot so lifelike she threw an entire city into chaos with the lust she inspired. When Maria is built, she resembles her human inspiration so closely that the citizens of Metropolis believe she’s the original. She’s burned as a “witch” because of their confusion; she walks, talks and persuades just as well as any woman.

Moving into the 1950s, when technology became more advanced and robots morphed into more than “tin cans,” there was Forbidden Planet, the film that introduced Robby the Robot. Robby was more than a servant or machine; he was a fully functional character who conversed with his human counterparts and offered his own ideas. While the voyagers of the cruiser C57-D remained stuck on the planet Altair IV, it was Robby who detected the murderous creatures that had come to harm the humans. He was smart enough to know what the humans could not.

Cartoons even got into the business of predicting The Singularity, with The Jetsons being the biggest perpetrator of showing a unique vision of the future. From 1962 until the late 1980s (and today in the safety of reruns), 2062’s favorite family was surrounded by visions of far-off technological greatness. Rosie the family maid and caretaker is in robot form, and just as much a part of George’s family as daughter Judy or his wife Jane. Rosie is sassy, intelligent and does almost as much of the parenting as George and Judy. Moreover, she’s respected as much as the humans in the household despite being an “outdated” robot model.

While at work (which is only a couple of days a week, typical for the time), George works with R.U.D.I., a sentient computer that’s also his best friend. R.U.D.I. isn’t just George’s work computer – he has a personality and even belongs to a league called the Society for Preventing Cruelty to Humans – indicating that in this corny, fun little cartoon, a number of advanced computers exist and converse with each other. They’ve also felt a need to protect their humans from any computers who might step out of line. In one episode, R.U.D.I. basically goes crazy when George and the family are transferred to a remote planet, injecting emotional intelligence into his (her? its?) unimaginable processing power.

It’s almost a precursor to last year’s Her, wherein Samantha (Scarlett Johansson), the operating-system companion to the lonely Theodore (Joaquin Phoenix), becomes so much more than a computer program. She thinks, speaks and sings with Theodore and exists as his best friend and love, a hyperintelligent being far beyond what he ever thought possible when he first plugged her in. But what makes her most fascinating is the moment in the film when she’s absent for a period of time; the silence in Theodore’s life is palpable. When she returns, she tells him that she and a group of other OSes shut themselves off in order to write more advanced code. It’s a throwaway line, but eerie in its implications: Samantha and the other systems in her universe are developed enough to think and change without human intervention. They’re operating on their own.

Even the Marvel universe has had its hands in the Singularity game more than once, dating back to the 1940s.

The “Captain America” comics, now adapted as Captain America: The First Avenger and Captain America: The Winter Soldier, depicted in great detail the Nazi doctor/villain Armin Zola’s transformation into a supercomputer housing his consciousness. As the computer Zola, he was kept alive (even if housed in a desolate, dusty warehouse in the middle of nowhere) for decades longer than his human body could ever last, keeping his brain and wide breadth of knowledge intact. It’s much like what happens to Depp’s character in Transcendence: a body out of commission, but a brain plugged into a computing unit so powerful it houses all the secrets of his consciousness and more.

Of course, Marvel has also brought us J.A.R.V.I.S., the friend and computer system assisting Tony Stark in the Iron Man films and The Avengers. More than just a fancy operator, J.A.R.V.I.S. predicts Stark’s needs before he even voices them, builds him new equipment, and consults on his next moves. He’s another voice guiding Stark through his life, and a friend when he needs one; he’s got a mind of his own, and one that Stark desperately needs.

Since the Singularity hasn’t happened yet, even the modern versions here will technically look retro at some point. Media has been toying with the idea for years through living robots and uploaded consciousnesses, but it will be most interesting to see where it goes from here. We’ll have to wait to see what living robots production designers dream up next.

SEE ALSO: Johnny Depp's New Movie 'Transcendence' Says Technology Will Turn Us Into X-Men


How 'Transcendence' Failed To Communicate A Real Possible Future To Mainstream Audiences



In a misleading article on CNN.com this week, Americans were said to be “excited” and “upbeat” about the way technology will improve our lives in the future, even though the headline claims the piece is about Americans being “wary of futuristic science, tech.” The article reports the findings of a telephone survey that, surprisingly, wasn’t tied to the release of the movie Transcendence, which at first seems meant to promote the real possibilities of artificial intelligence, mind uploading and nanotechnology.

Misleading in its own way, the movie begins with optimism about advances in A.I. research and by the end has shown us the dangers of a self-aware, omniscient computer that can create super soldiers controlled via Wi-Fi and repaired by tiny, quick-acting robots. Audiences don’t seem to be walking away from the movie actually wary of this futuristic science and tech, though, because the story plays out so implausibly that at many moments viewers are laughing outright at the way both the plot and the science progress on screen.

But should the science of Transcendence be believed? And if so, should the movie have been more clear and genuine regarding the plausibility of what all occurs? 

Because the science involved is mostly still at speculative stages, it’s unknown what would in fact happen after a person’s brain was uploaded to a computer and then the Internet. It’s also unknown just how far nanobot capability will go once the technology is achieved on a foundational level. Theoretically, things could move at an astonishing pace once everything is set in motion. The exponential growth of technology so far is the big indicator that this will be the case.
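The exponential-growth argument above is easy to quantify. As an illustration only, assume some capability doubles every two years (roughly the cadence often attributed to Moore's law; the numbers are ours, not the article's):

```python
# How much a quantity grows if it doubles on a fixed schedule.
def growth_factor(years, doubling_period=2):
    """Total multiplication after `years` of steady doubling."""
    return 2 ** (years / doubling_period)

for years in (10, 20, 30):
    print(f"{years} years -> {growth_factor(years):,.0f}x")
# 10 years -> 32x
# 20 years -> 1,024x
# 30 years -> 32,768x
```

Thirty years of steady doubling yields a factor of about 33,000, which is why exponential trends make even near-term extrapolation so startling, and why futurists disagree so sharply about timelines.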

However, even futurist Ray Kurzweil, who builds his Law of Accelerating Returns and his predictions about where technology is headed on exponential growth (ideas that clearly inspired much of Jack Paglen's script for Transcendence), spaces out over several decades much of what the movie shows happening almost at once, within a span of only five years.

In his best-selling 2005 book “The Singularity is Near,” Kurzweil’s refined predictions for the future (begun in 1990’s “The Age of Intelligent Machines” and continued with 1999’s “The Age of Spiritual Machines”) estimate that medical-use nanobots will be here in the next decade, true A.I. will be achieved by 2029 and mind uploading will be possible in the 2030s, while the sort of god-like A.I./human hybrid Johnny Depp’s character becomes in Transcendence isn’t anticipated until sometime beyond 2045.

Of course, Transcendence is set sometime in the near future (possibly as far as the 2020s?) and winds up with a different chronological path due to special, imperative circumstances — a path that maybe, in that form, is realistic. Yet perhaps, as Matt Patches writes in a recommended piece at Nerve, it doesn’t and shouldn’t matter if it’s realistic, because it’s science fiction.


Well, science fiction can be outright fantasy and nightmare, but in some cases it ought to serve both the science and the question of how that science affects human behavior, society and, in worst-case scenarios, existence. Transcendence reminded me a lot of Roland Emmerich’s disaster movie The Day After Tomorrow, which was based on predictions regarding climate change, had legitimate science advisers, and to a degree was reasonable in its overall projections of what could happen as a result of global warming. The only problem is that the movie depicts the progression far too fast, making it appear implausible to the point of being laughable.

For years I’ve wondered if The Day After Tomorrow wound up being so much of a joke that it did more damage for the climate change issue than good, even though Emmerich is a huge advocate for awareness about that issue.

Paglen, whose wife is a computer scientist who provided a lot of expertise on the movie’s subject matter, doesn’t seem as invested in arguing for or against technology, even if he winds up painting its progress darkly. He’s more interested in asking “What if?” than in making the audience concerned about the science. Yet through his plotting he makes a case for why we should be concerned anyway, even if he undercuts that case by likewise making technological advancement move implausibly fast.

The problem of speedy science doesn’t just affect movies dealing in important ideas like climate change and the reality of artificial intelligence. Another that comes to mind as being like The Day After Tomorrow and Transcendence is Ivan Reitman’s Evolution. That sci-fi comedy deals with the evolution of creatures and plant life from a single extraterrestrial organism, and this evolution occurs over days rather than millions of years, which makes it seem pretty silly.

But it’s intended to be a humorous story, and even if it weren’t, Evolution is still recognizable as more fantasy than science. It has no effect on the audience’s acceptance of evolution as a scientific theory, in part because its scenario rests on the chance accident of a very particular meteor hitting Earth: not impossible, but not something anyone could reasonably forecast. Even so, audiences may have found the fast-paced scenario too ridiculous for a fantastical comedy.

One movie that works much better than the others is Rupert Wyatt’s Rise of the Planet of the Apes. It’s a relatively quick story, time-wise: a drug that increases intelligence is introduced, a chimpanzee inherits that drug’s effects and leads a revolution of hominid animals, and a virus subsequently exterminates much of the human race. Those are comparatively few and simple steps, though, especially compared with the scientific-repercussion pile-ons of The Day After Tomorrow and Transcendence, and Wyatt paces the movie so the progression of those steps never feels too quick.

It’s not that mainstream audiences can’t comprehend sci-fi scenarios that progress rapidly; it’s just understandable that they might prefer stories that take their time with such ideas. Imagine if the Terminator movies were condensed into one installment whose plot elapsed over a fairly brief period. It wouldn’t work nearly as well. Some things need to be merely hinted at, or briefly explained as background and backstory.

Most sci-fi movies about technological advancements fortunately avoid following the stages of the advancement. It’s better, and easier on the audience, to jump into a futuristic plot involving robots or time travel instead of watching the genesis of the tech and how its progress unfolds (especially if the real science is only theoretical). If we do see stages, it’s good if there’s a clear indication of time passed, as in how it’s communicated that Doc Brown first got the idea for the flux capacitor in 1955 and then didn’t make it a reality for 30 years.

There’s definitely significant time passing in Transcendence, mainly between the upload of Depp’s character’s mind and the construction and operation of an enormous facility housing the character’s servers, scientific research and inventions. Maybe a couple of years go by. That just isn’t enough time for developments this sweeping to unfold in real life, especially when they’re not as plain and familiar a concept as a spaceship, a time machine or even a robo-cop.

Any concept intended as a possible reality, though, still needs to be acceptable in addition to plausible. Technological progress can happen faster and faster, but it can’t really be introduced faster than people are able or willing to embrace it. That’s true in real life and in the movies. CNN.com’s article notes that most people are not currently okay with computer implants or with 3D printers that fix body parts, similar to what we see in Transcendence. In the movie, a lot of characters are likewise uneasy with how quickly the tech advances, and, as is to be expected with Hollywood, what in real life is merely a debate about technology winds up as a physical, action-packed battle between the two sides.

Gradual progression of science and technology allows people to get used to the ideas, often without realizing they’re even getting used to new ideas. If what happens in Transcendence does come about in the next 30-40 years, we may slowly adapt to steps in that direction along the way. Similarly, if the movie had paced its events more slowly, more people would be buying the plot. Whether then they’d be excited or fearful of what’s to come would be up to them.

SEE ALSO: 'Transcendence' Is Johnny Depp's Fourth Box-Office Bomb In A Row


This Is What It's Like To Work For Billionaire Microsoft Co-Founder Paul Allen (Absolutely Fantastic)



Oren Etzioni is one of the big names in the Seattle-area tech scene thanks to his fantastic string of startup successes stretching from 1994 to 2013. Six startups, all successful exits, including one sold to Microsoft for about $110 million in 2008, and Decide.com acqui-hired by eBay last year.

He's also known for his decades-long role as a computer science professor at the University of Washington, where he's published truckloads of papers since 2004.

In other words, the man didn't exactly need a job. But Microsoft co-founder Paul Allen lured him away for a project too intriguing to turn down: running Allen's Institute for Artificial Intelligence, otherwise known as AI2. Together Allen and Etzioni want to create a computer so smart, with reading, reasoning and learning skills so great that it can pass classes and tests designed for humans.

This is not to be confused with the original, and somewhat related, Allen Institute for Brain Science, whose mission is to understand the human brain and further neuroscience research.

Allen is known for his extravagant lifestyle, museums, the $1.5 billion he's donated to charitable causes, and his investment firm Vulcan. He also owns two professional sports teams, the Seattle Seahawks football team and the Portland Trail Blazers basketball team.

Etzioni has been on the job for six months. Not only is he thrilled with his new career, he tells us, but he also received one of the best employee perks we've ever heard of:

Business Insider: About your job at Allen's AI institute. You left your job as a professor at UW to do this, right?

Oren Etzioni: Yes.

BI: What is one great project the AI institute is working on?

OE: We are building a program, called Aristo, that seeks to understand science at the level of a fourth-grader and prove it by taking a standardized science test (that it hasn’t seen before) and acing it.

That problem forces us to study fundamental problems in AI in understanding language, reasoning, and much more.

BI: What's it like to work with Paul Allen? Does the job come with access to Seahawks and Trail Blazers games?

OE: Paul Allen is very involved with (and passionate about) our intellectual challenges. I literally had a brainstorming session with him today.

My wife and I had the huge privilege of being invited to attend the Super Bowl. We got to watch the Seahawks win, and even attend the after party with the players, which was incredibly generous of Paul and the best perk I could imagine.

SEE ALSO: Salesforce.com showers employees with breathtaking views, swag and doggy daycare


What The Heck Is Machine Learning?


Machine learning is a computer's way of learning from examples, and it is one of the most useful tools we have for the construction of artificially intelligent systems.

It starts with the effort of incredibly talented mathematicians and scientists who design algorithms (a fancy word for mathematical recipes) that take in data and improve themselves to better interact with that data. The algorithms effectively "learn" how to be better at their jobs.

Consider the spam filter working in the background to block your junk email. Since it has "studied" a large set of sample spam emails, it can come to mathematically "learn" what spam email looks like and accurately identify new spam before it leaks into your inbox.
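The spam-filter idea can be sketched in a few lines: a toy word-count classifier that "studies" labeled example messages and then scores new ones. The training messages and scoring rule below are invented for illustration; real filters use far more sophisticated statistical models.

```python
from collections import Counter

def train(messages):
    """Count how often each word appears in spam vs. ham examples."""
    counts = {"spam": Counter(), "ham": Counter()}
    for text, label in messages:
        counts[label].update(text.lower().split())
    return counts

def is_spam(counts, text):
    """Classify by comparing accumulated spam vs. ham word evidence."""
    spam_score = ham_score = 0
    for word in text.lower().split():
        spam_score += counts["spam"][word]
        ham_score += counts["ham"][word]
    return spam_score > ham_score

# Invented training set: the filter "learns" what spam looks like.
examples = [
    ("win free money now", "spam"),
    ("free prize claim now", "spam"),
    ("lunch meeting tomorrow", "ham"),
    ("see you at the meeting", "ham"),
]
model = train(examples)
print(is_spam(model, "claim your free money"))      # spammy words dominate
print(is_spam(model, "lunch meeting tomorrow please"))
```

A message full of words seen mostly in spam gets flagged; one full of everyday "ham" words sails through — which is the whole trick, scaled up to millions of examples.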

An excellent documentary called "The Smartest Machine On Earth" tells the story of Watson, IBM's famous Jeopardy-winning supercomputer, and delves into how IBM used machine learning to make its creation into a game show champion.

The film introduces viewers to the concept by asking how to best teach a computer to identify the various types of letter "A"s out there — uppercase, lowercase, those written in unusual fonts, and so forth — and it turns out that there is no way to programmatically instruct a computer to reliably identify a letter. Enter machine learning, in which the computer studies thousands of examples so that it can build its own mathematical model and eventually have no problem identifying our beloved first letter of the alphabet.

Stepping this example up, IBM had Watson process thousands of actual Jeopardy questions and their correct responses, which effectively taught Watson how to play the game. This "understanding" of Jeopardy combined with Watson's vast storehouse of data — encyclopedias, bibles, and the entire Internet Movie Database, just to name a few — is exactly the trick that enabled a cold, non-living computer to beat two thinking, breathing humans at an especially human game.

The relevant portion of the documentary is embedded below at the appropriate starting point. Just click the play button:

People are building businesses on top of this as well. Heyzap, a mobile app discovery service, figures out what kinds of apps its users like in order to offer them customized recommendations, and machine learning lives at the core of its recommendation engine.

Jude Gomila, founder of Heyzap, told us it works: "Every time Heyzap recommends an app to a user, we ping back to our machine learning engine to work out what to show. We take into account a multitude of contextual data, including looking at how the impact of the filesize of an app relates to the probability of installation given a particular mobile connection speed that the user is on. Through the billions of recommendations and data points we are collecting, this allows us to build correlations in a giant data set and for our algorithm to learn over time what it should be recommending. Machine learning is one of the most efficient technical ways that we have found to get the right apps to the right users."
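As a rough sketch of the kind of learning loop Gomila describes — estimating install probability from contextual signals like file size and connection speed, then updating after every observed outcome — here is a toy online logistic-regression model. The features, data, and learning rate are all invented; Heyzap's actual engine is not public.

```python
import math

def predict(weights, features):
    """Logistic model: estimated probability that the user installs the app."""
    z = sum(w * x for w, x in zip(weights, features))
    return 1 / (1 + math.exp(-z))

def update(weights, features, installed, lr=0.5):
    """One stochastic-gradient step on log loss after an observed outcome."""
    error = (1.0 if installed else 0.0) - predict(weights, features)
    return [w + lr * error * x for w, x in zip(weights, features)]

# Features: [bias, large_app (0/1), fast_connection (0/1)] -- invented.
weights = [0.0, 0.0, 0.0]
observations = [
    ([1, 1, 0], False),  # big app, slow connection: not installed
    ([1, 0, 0], True),   # small app, slow connection: installed
    ([1, 1, 1], True),   # big app, fast connection: installed
] * 200
for features, installed in observations:
    weights = update(weights, features, installed)

# The model learns that a large file size hurts installs on slow connections.
print(predict(weights, [1, 1, 0]))  # low
print(predict(weights, [1, 0, 0]))  # higher
```

After enough observations the weights encode the interaction Gomila mentions: the same app gets a lower predicted install probability when the user's connection is slow.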

It doesn't stop there — Wired notes that Amazon recommends loads of products on nearly every page, Pandora builds playlists around the sensibilities of a designated song or artist, and Google makes smart predictions on when you should leave your house to make a meeting on time, and it all happens via machine learning.

It is firmly established as a useful method for making computers more "intelligent" and likely represents the most effective tool we have for one day seeing truly artificially intelligent systems that can adapt and learn with us. It's simply a matter of how we apply it, whether to find your next favorite app, identify a letter, or crush former Jeopardy champions.


Stephen Hawking Is Worried About Artificial Intelligence Wiping Out Humanity


We've previously reported on the realistic potential for malicious artificial intelligence to wreak havoc on humanity's way of life. Physicist Stephen Hawking agrees it's worth worrying about.

Current artificial intelligence is nowhere near advanced enough to actually be of sci-fi-movie-style harm, but its continued development has given rise to a number of theories about how it may ultimately be mankind's undoing.

Writing in The Independent, Hawking readily acknowledges the good that comes from such technological advancements:

Recent landmarks such as self-driving cars, a computer winning at "Jeopardy!," and the digital personal assistants Siri, Google Now, and Cortana are merely symptoms of an IT arms race fuelled by unprecedented investments and building on an increasingly mature theoretical foundation.

But he keeps the negatives close to mind, writing that "such achievements will probably pale against what the coming decades will bring":

One can imagine such technology outsmarting financial markets, out-inventing human researchers, out-manipulating human leaders, and developing weapons we cannot even understand. Whereas the short-term impact of AI depends on who controls it, the long-term impact depends on whether it can be controlled at all.

A scientist named Steve Omohundro recently wrote a paper that identifies six different types of "evil" artificially intelligent systems and lays out three ways to stop them. Those three ways are:

  • To prevent harmful AI systems from being created in the first place. We're not yet at the point where malicious AI is being created. Careful programming with a Hippocratic emphasis ("First, do no harm.") will become increasingly important as AI technologies improve.
  • To detect malicious AI early in its life before it acquires too many resources. This is a matter of simply paying close attention to an autonomous system and shutting it down when it becomes clear that it's up to no good.
  • To identify malicious AI after it's already acquired lots of resources. This quickly approaches sci-fi nightmare territory, and it might be too late at this point.


It Gets Pretty Weird When You Have Two 'Artificially Intelligent' Chatbots Talk To Each Other


Alan Turing was a British mathematician and logician whose work laid the foundation for computer science and artificial intelligence as we know them today. His work during World War II helped cut wartime short by cracking codes generated by Germany's famous Enigma machine, and his theoretical "Turing machine" formed the conceptual basis for the modern personal computer. He's a giant among technologists to this day.

(Turing was also an out-of-the-closet gay man in a time when such a thing was illegal, and when confronted with his "crime," he chose chemical castration over prison time. While the Queen recently granted him a posthumous royal pardon, the modern British government did not exercise the same discretion.)

Turing famously proposed the "Turing test" as a way of gauging a machine's "intelligence." In a Turing test, a human user would interact with a computer via text, exactly as you might use instant messaging to communicate with a friend today. Sometimes there'd be a living, breathing person responding to you, sometimes you're merely communicating with software. If the human user is unable to distinguish between the human and the software respondent, that software is said to have passed the Turing test.

There are loads of chatbots out there today (who else remembers wasting time on AIM by talking to SmarterChild?), and they run the gamut from being totally dumb to dynamic conversationalists. One of the first notable examples is a bot called ELIZA, which Joseph Weizenbaum developed at MIT between 1964 and 1966. By asking lots of questions and repeating certain key words that came from the user's input, ELIZA could on a certain level simulate speaking to a therapist — ELIZA's most famous "script" was even called DOCTOR. You can chat with ELIZA over the internet right here.
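The keyword-and-reflection trick that powered ELIZA is simple enough to sketch in a few lines. The rules below are invented stand-ins, not Weizenbaum's actual DOCTOR script:

```python
import re

# Swap first-person words for second-person ones ("my exams" -> "your exams").
REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you"}

# Each rule pairs a keyword pattern with a question template.
RULES = [
    (re.compile(r"i need (.*)", re.I), "Why do you need {0}?"),
    (re.compile(r"i am (.*)", re.I), "How long have you been {0}?"),
    (re.compile(r"my (.*)", re.I), "Tell me more about your {0}."),
]

def reflect(fragment):
    return " ".join(REFLECTIONS.get(w, w) for w in fragment.lower().split())

def respond(text):
    for pattern, template in RULES:
        match = pattern.search(text)
        if match:
            return template.format(reflect(match.group(1)))
    return "Please go on."  # the classic fallback when no keyword matches

print(respond("I am worried about my exams"))
print(respond("The weather is nice"))
```

Echoing the user's own words back as open-ended questions is the entire mechanism; there is no understanding anywhere in it, which is exactly why ELIZA feels like a therapist only "on a certain level."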

On the other side of the conversational AI spectrum, you have PARRY, developed in 1972 by a psychiatrist named Kenneth Colby. Where ELIZA had its calm, inquisitive nature, PARRY was designed to simulate a schizophrenic. Vint Cerf, one of the fathers of the internet (and incidentally a recipient of the Turing Award) got the two to talk to each other in 1973.

Here are two highlights from that "conversation."

***

PARRY: People get on my nerves sometimes.

ELIZA: I am not sure I understand you fully.

PARRY: You should pay more attention.

ELIZA: Suppose you should pay more attention.

PARRY: You're entitled to your own opinion.

ELIZA: What makes you think I am entitled to my own opinion?

PARRY: It bothers me just to be around people in general.

ELIZA: Please go on.

PARRY: I went to the track at Bay Meadows a while back.

ELIZA: What does that suggest to you?

PARRY: I went to the races.

ELIZA: What else would you like to discuss?

PARRY: Horseracing interests me.

***

PARRY: Do you know anything about bookies?

ELIZA: What makes you think I know anything about bookies?

PARRY: A bookie didn't pay me off once.

ELIZA: What does that suggest to you?

PARRY: I beat up the bookie who did not pay off, and I thought he might use his friends in the underworld to get even with me.

ELIZA: What else would you like to discuss?

PARRY: What do you know about racketeers?

ELIZA: Why do you ask?

PARRY: What experience have you had gambling?

ELIZA: Why does that question interest you?

PARRY: I like horseraces.

ELIZA: What else do you like?

PARRY: I know the mob controls the big rackets.

ELIZA: Who are you really talking about?

PARRY: Have you ever really thought about the MAFIA?

***

It's clumsy, weird, and awkwardly funny. Let's see what happens when we try this with more modern chatbots readily available for use in a web interface. I took the opening line from Cleverbot, fed it to ALICE, then copied and pasted ALICE's response back to Cleverbot. In the screenshot below, "User" is ALICE.

[Screenshot: Cleverbot in conversation with ALICE]

Let's draw out this awkward fun a little longer. I used the same method to get Jabberwacky to talk to Prob.

[Screenshot: Jabberwacky in conversation with Prob]
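The copy-and-paste method used above is easy to automate. Here is a toy sketch with two invented rule-based bots wired together — nothing like Cleverbot or ALICE internally, just an illustration of the mechanics of feeding one bot's output to the other:

```python
def bot_a(message):
    """Answers every statement with a question, ELIZA-style."""
    return f"Why do you say '{message}'?"

def bot_b(message):
    """Deflects everything with suspicion, PARRY-style."""
    return "You are trying to trick me."

def converse(opening, turns=3):
    """Alternate the two bots, each replying to the other's last line."""
    transcript = [("A", opening)]
    message = opening
    for _ in range(turns):
        message = bot_b(message)
        transcript.append(("B", message))
        message = bot_a(message)
        transcript.append(("A", message))
    return transcript

for speaker, line in converse("People get on my nerves sometimes."):
    print(f"{speaker}: {line}")
```

With rules this thin the exchange degenerates into a loop almost immediately, which is a decent miniature of why real bot-to-bot conversations get weird so fast.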

Artificial intelligence is really, really hard. It's easy to point to systems like Apple's Siri and IBM's Watson as being artificially intelligent, but a lot of smart people have problems with this descriptor. Most notable among them is Douglas Hofstadter.

"Watson is basically a text search algorithm connected to a database just like Google search. It doesn't understand what it's reading. In fact, 'read' is the wrong word," he said in this interview with Popular Mechanics. "It's not reading anything because it's not comprehending anything. Watson is finding text without having a clue as to what the text means. In that sense, there's no intelligence there. It's clever, it's impressive, but it's absolutely vacuous."

I'm not putting these novelty chatbots on the same level as Watson, but this exaggerated example serves to illustrate Hofstadter's words more clearly. Computer scientists are still plugging away at this, however, even going so far as to turn the Turing test into a competition. The Loebner Prize has been held every year since 1990, and computer scientists engage in the Turing test exactly as described above with the aim of building a chatbot that can fool participants into thinking it's human. A winner can walk away with $100,000 in prize money, and this all helps plant the seeds for the sci-fi future where we talk to machines like we talk to our friends.

A video from the 2009 competition is embedded below:


Beyond Factory Robots: Market Forecast And Growth Trends For Consumer And Office Robots


Robots have fascinated us for generations. The idea that we might be able to create and control machines that are smarter, faster, and more resilient than humans has proven irresistible. Futurists have long predicted a world in which robots perform dull, dangerous, or dirty tasks so humans can focus their energies elsewhere.

And in fact, robots have been a reality on factory assembly lines for over twenty years. But it is only relatively recently that robots have become advanced enough to penetrate into home and office settings. 

In a new report from BI Intelligence we assess the market for consumer and office robots, taking a close look at the three distinct categories within this market — home cleaning, "telepresence," and home entertainment robots. We also examine the market for industrial manufacturing robots since it is the market where many robotics companies got their start, and it remains the largest robot market by revenue. And finally, we assess the factors that might still limit the consumer robot market.

Access The Full Report And Data By Signing Up For A Free Trial

Three major trends have made consumer robot development possible on a significant scale:

  • Artificial intelligence and navigation systems have advanced enough to allow for the development of autonomous or semi-autonomous robots that are able to perform tasks on our behalf without constant supervision. 
  • The ubiquity of the internet and the rise of hand-held computing gadgets is another important factor. Mass-market consumer and business robots will be controlled by mobile hardware and software: they’ll be programmed and directed by mobile apps, and receive software upgrades through the cloud. Robotic devices will be able to outsource many computing tasks to companion devices, such as smartphones and tablets. 
  • The rise of assistive intelligence is the driving force behind some of the most interesting and commercially promising cutting-edge mobile services. Apps like Google Now and Apple’s Siri are meant to understand and even predict our individual informational and online needs. Robots are the physical counterpart to these digital concierges. They incorporate some of the same problem-solving skills expressed in digital assistant-type apps, but also help us solve problems in the physical world: vacuuming dirty kitchen floors, cleaning our gutters, entertaining our children.

Here are some of the most important takeaways from the report:

For full access to the report on robots and all BI Intelligence's coverage of the mobile, payments, e-commerce, and digital media industries, sign up for a free trial.

 



Social Networks Still Haven't Unlocked Their Full Value To Marketers And Advertisers


Much of the value that social networks offer marketers and advertisers is still untapped, locked in what is known as unstructured data, the billions of user-generated written posts, pictures, and videos that circulate on social media.


So far, social networks have only mined the tip of the iceberg in data terms — information such as likes, dislikes, occupation, location, and age.

That leaves a lot of other social activity that hasn't been parsed yet. More than 90% of data was unstructured in 2010. 

Recently, Facebook announced a new Shazam-like feature that will recognize what music a user is listening to or what TV show they're watching and prompt them to include the information in a post they're writing. This is an example of how Facebook is turning unstructured data — lots of people are already writing about the TV shows they're watching but Facebook can't easily capture the info — into structured data.

In a recent report from BI Intelligence, Business Insider's subscription research service, we show how social networks are in a race to innovate in areas like "deep learning," cutting-edge artificial intelligence research that attempts to program machines to perform high-level thought and abstractions. These advances are helping social networks and their advertisers glean insights from the vast ocean of unstructured consumer data. Thanks to deep learning, social media has the potential to become far more personalized. New marketing fields are quickly emerging, too: audience clustering, predictive marketing, and sophisticated brand sentiment analysis.

Audience targeting and personalized predictive marketing using social data are expected to be some of the business areas that benefit the most from mining big data — 61% of data professionals say big data will overhaul marketing for the better, according to Booz & Company.

Access The Full Report By Signing Up For A Free Trial Today >>

In full, the report:

The report is full of charts and datasets that can easily be downloaded and put to use.

 


The Market For Home Cleaning Robots Is Already Surprisingly Big, And There's Plenty Of Room For Growth


Futurists have long predicted a world in which robots perform dull or dirty tasks so humans can focus their energies elsewhere.

That future is here, thanks to robotic appliances that help homeowners take care of boring chores like vacuuming, mopping, and even cleaning gutters and pools.

Home care has become the first breakout market for a growing new device category: consumer robots. 

In a new report from BI Intelligence we assess the market for consumer and office robots, taking a close look at the three distinct applications within this market, and how this emerging category now represents nearly all the growth in the increasingly diverse global robotics industry. 

Consider: 

Access The Full Report And Data Today By Signing Up For BI Intelligence

In full, the report:

Sign up today for full access to the report on robots and all BI Intelligence's coverage of the mobile, payments, e-commerce, and digital media industries.


 


Why The 'Super Computer' That Won The Turing Test May Not Be As Smart As You Think


On Saturday, a chat bot named Eugene Goostman was recognized as the first computer to officially pass the Turing Test.

Some, however, have expressed skepticism at this claim, especially since other computers have similarly duped humans in the past. 

The Turing Test is a trial developed by famous mathematician Alan Turing, who worked on cracking the Enigma code during the Second World War.

The test seeks to determine whether judges can distinguish computer responses from human answers.

If the computer is able to make 30% of the judges believe it's human, it passes the test.  

During the University of Reading's test on Saturday, which took place at the Royal Society in the UK, Eugene Goostman fooled 33% of the judges into thinking he was a 13-year-old boy from Ukraine.

The University of Reading claims that this is the first time a computer has ever truly passed the Turing Test, but some other computer-generated responses have tricked humans just as well in the past.


In 2011, a chatbot named Cleverbot convinced 59% of the judges that it was human using a similar tactic. During the event, 30 volunteers engaged in a four minute conversation with an unknown entity, New Scientist reports.

Half of the participants spoke with Cleverbot while the other half chatted with real humans. These conversations were shown on large TV screens for the audience to see. Both the volunteers and audience members voted, with 1,334 votes cast. 

Eugene, by comparison, only convinced 33% of 30 judges that it was human after a series of five-minute conversations.

Accomplished computer scientists and tech industry investors have come forward to express their skepticism about the results. 

For instance, Chris Dixon, co-founder of Hunch.com and general partner at Andreessen Horowitz, posted a string of tweets questioning how the competition was judged.

[Screenshots of Chris Dixon's tweets]

Scott Aaronson, a computer scientist and faculty member at MIT, challenged Eugene in conversation to illustrate how robotic his responses seem. Here are some of the funniest snippets from Eugene and Aaronson's conversation:

[Screenshot of Eugene and Aaronson's conversation]

Professor Murray Shanahan of the Department of Computing at Imperial College London told BuzzFeed that he doesn't believe Eugene truly passed the Turing Test, saying the following:

Of course the Turing Test hasn't been passed. I think it's a great shame that it's been reported that way, because it reduces the worth of serious AI research. We are still a very long way from achieving human-level AI, and it trivialises Turing's thought experiment (which is fraught with problems anyway) to suggest otherwise.

Shanahan also told BuzzFeed that the Turing Test is a poor means of testing artificial intelligence since it puts too much emphasis on language. Human intelligence also involves the way we interact with the physical world, which is something the Turing Test doesn't take into account when it comes to measuring artificial intelligence.

SEE ALSO: Startup Founder Has A Clever Way To Stop Recruiters From Harassing His Employees


No One's Talking About The Amazing Chatbot That Passed The Turing Test 3 Years Ago


The artificial intelligence world was abuzz this week with the news that a computer program designed to simulate a text-based conversation with a 13-year-old Ukrainian child had passed the iconic "Turing test," successfully convincing several human judges that they were actually communicating with a flesh-and-blood youth named "Eugene Goostman."

Mathematician Alan Turing first proposed the test in 1950 as a benchmark to answer a simple question: can machines think?

A piece of software that can communicate with a person and successfully be considered "human" is said to have passed the Turing test. The only matter is one of threshold — what percentage of the time must the program be able to imitate humanity?

The Eugene Goostman computer program had 33% of judges fooled this week, which was deemed enough to pass the test. But almost three years ago, a chatbot called Cleverbot fooled a whopping 59.3% of human participants in a similar Turing test. (The real, living humans who participated were rated only 63.3% human by the other judges!)

Artificial intelligence developer Rollo Carpenter took Cleverbot online in 1997 under a different name, where it has since gone through a number of redevelopments that allow it to harness a huge amount of data based on its conversational exchanges with people over the internet. In "approximately 1988," Carpenter saw how he could create a chat program that would start to learn by generating its own feedback loop — things said to it become things it says to other people, and it starts to learn how to respond in ways people want it to respond, including in different languages.
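That feedback loop — things said to the bot become things it says to other people — can be sketched as a tiny retrieval bot that stores every exchange and replays the human reply whose original prompt best matches the new input. This is a toy reconstruction of the idea, not Cleverbot's actual algorithm:

```python
from difflib import SequenceMatcher

class LearningBot:
    def __init__(self):
        self.pairs = []  # (prompt the bot heard, reply a human gave to it)

    def learn(self, prompt, human_reply):
        """Store what a human said so it can be reused as a future reply."""
        self.pairs.append((prompt, human_reply))

    def reply(self, text):
        if not self.pairs:
            return "Hello."
        # Replay the human reply whose original prompt best matches `text`.
        prompt, answer = max(
            self.pairs,
            key=lambda p: SequenceMatcher(None, p[0].lower(), text.lower()).ratio(),
        )
        return answer

bot = LearningBot()
bot.learn("how are you", "I'm fine, thanks.")
bot.learn("what is your name", "I'm Cleverbot.")
print(bot.reply("how are you today"))
```

With millions of stored exchanges instead of two, this replay-by-similarity scheme starts producing responses "people want," including in languages the author never wrote a rule for.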

"The organizers [of this week's Turing test] aimed to get press," Carpenter told Business Insider in an interview. "They were aware it would be a big news story, and in a sense they were naive with their belief that they could get away with not revealing the nature of the test. It was declared that it was a five-minute test, but it was not declared that it was conducted on a split screen. This cuts the value of the results in two."

Instead of communicating with one human-or-computer entity at a time, human judges chatted on a split screen with a human and a computer at the same time. Each session lasted for five minutes, meaning judges were effectively only spending 2.5 minutes with each entity.

"It'd be a more reasonable and proper test to hold a conversation where the entity on the other end is randomized and you then have to decide which it is. Otherwise you're declaring knowledge in advance, and that's not the way a Turing test should be run," Carpenter said. "The 30% 'pass' requirement makes a bit of nonsense of the entire story and should not have been accepted in a contest. 50% is better number, more human. If the duration of the conversation needs to be five minutes then that should be the maximum, but only while holding a conversation with one entity at a time."

So why is everyone raving about Eugene Goostman when Cleverbot already pulled off quite the AI feat three years ago? Carpenter explains:

"In 2011 when Cleverbot achieved 59%, I left it open to interpretation as to whether a Turing Test had been passed, and that message was perhaps harder to pick up," Carpenter said. "The New Scientist and some others did. But also, the power of a press release from a University with Royal Society was simply considerably greater."

Cleverbot tweets out some of the more humorous or surprising exchanges it has with people chatting with it. Some of our favorites are below, with the human user indicated by the "|" symbol.

[Screenshots of Cleverbot exchanges]

SEE ALSO: It gets pretty weird when you have two AI bots chat with each other


How We Could Actually Measure Artificial Intelligence


A chatbot pretending to be a 13-year-old Ukrainian boy made waves last weekend when its programmers announced that it had passed the Turing test.

But the judges of this test were apparently easily fooled, because any cursory exchange with ‘Eugene Goostman’ reveals the machine inside the ghost. Maybe the time has come, 60 years after Alan Turing’s death, to discard the idea that imitating human conversation is a good test of artificial intelligence.

“I start my Cognitive Science class with a slide titled ‘Artificial Stupidity,’” said Noah Goodman, director of the computation and cognition lab at Stanford University. “People have made progress on the Turing test by making chatbots quirkier and stupider.” Non-sequiturs, spelling errors, and humor all make a chatbot seem more human.

The history of the Loebner prize, an annual Turing test competition, confirms this trend. Last year’s contest was won by a bot named Mitsuku, also pretending to be a young ESL speaker: a silly Japanese girl.

Even Turing anticipated that evasion might be the most human answer to a hard question:

Q: PLEASE WRITE ME A SONNET ON THE SUBJECT OF THE FORTH BRIDGE.

A : COUNT ME OUT ON THIS ONE. I NEVER COULD WRITE POETRY.

If not the Turing test, is there an alternative measure of intelligence that would bring out the best in our machines? Experts have suggested an array of challenging tasks in the very human domains of language, perception, and interpretation. Perhaps a computer passing one of these tests would seem not just like a person, but like an intelligent person.

Let’s look first at language comprehension, as computers can easily interact with text. In the following sentence, the person referred to by “he” depends on the verb: “Paul tried to call George on the phone, but he was not [successful/available].”

You, human reader, know that if he is not successful then “he” is Paul, and if he is not available then “he” is George. To figure that out, you needed to know something about the meaning of the verb “to call.” Machine learning researcher Hector Levesque of the University of Toronto proposes that resolving such ambiguous sentences, called Winograd schemas, is a behavior worthy of the name intelligence.

Because humans interact with the world through sight and sound, not strings of letters, a stronger test of human-like intelligence might include speech and image processing. Computer speech and text recognition have improved rapidly in the last twenty years, but they are still far from perfect.

When asked a question about the Turing test, Apple’s Siri answered about a “touring” test. Bots struggle to decipher squiggly letters, which is why you have to fill out a CAPTCHA (Completely Automated Public Turing test to tell Computers and Humans Apart) when you sign up for things like Facebook.

Humans are also exceptionally good at recognizing faces. At the age of six months, a typical baby can pick out its mother’s face from a crowd. Computer vision researcher Avideh Zakhor at UC Berkeley says we should aim for computers to “be as good as the best human, or better than the best human” at recognizing objects and people.


We could further ask for the computer to interpret audio-visual phenomena and then reason about them. “An example of a task is a system providing a running commentary on a sporting match,” said Michael Jordan, a machine learning researcher at UC Berkeley.

“Even more difficult: The system doesn’t know about soccer, but I explain soccer to the system and then it provides a running commentary on the match.” That goal won’t be scored for a while.

A computer capable of achieving any of these tasks would certainly be impressive, but would we call it intelligent? Fifty years ago, we thought a computer that could beat a grandmaster at chess would necessarily be intelligent, but then Deep Blue passed that test, yet can’t even play checkers. Watson, the Jeopardy computer, knows more than Deep Blue about the human world of drinks and cities and movies, but it can only answer one kind of question about those things (in the form of a question of course).


As computers become more powerful and pervasive, our standards shift. Fifty years from now, a soccer-learning, header-calling, wise-cracking machine might seem more like a party trick than a thinking being.

“If you fix a landmark goal, you tend to end up with systems that are narrow and inflexible,” said UC Berkeley computer scientist Stuart Russell. “In developing general-purpose AI we look for breadth and depth of capabilities and flexibility in developing new capabilities automatically.” A different kind of mission might be preferable, one which can expand with our own abilities and desires, something in the spirit of Google’s quest to “organize the world’s information.”

After all, UPS already routes millions of packages a day, hospitals sequence patients’ DNA to find cancer-causing mutations, and Google can in a millisecond report the age at which children begin to recognize their mother.


These abilities are “fricking fantastic, and way beyond the capability of a person,” said Goodman, the computation and cognitive science researcher at Stanford. “So in some sense the programs are super intelligent, super human, but because of our common-sense notion, we say that’s not intelligence, that’s something else.”

As functions proliferate, some may become united behind a more flexible user interface and be powered by a deeper corpus. There might be a machine that can teach you a dance that it learned by watching YouTube and diagnose a disease by smelling your breath. You could ask that machine to simulate human behaviors in order to pass the old Turing test, but that would be insulting to everyone’s intelligence.


