Channel: Artificial Intelligence

This web site answers one simple question: Is the person in that picture naked?


A new website called IsItNude.com does exactly what you'd guess based on the name: Pop in a picture, and it'll tell you if the photo's subject is naked. 

It sounds like a joke, but it was created by a company called Algorithmia to demonstrate how complicated math and photo detection techniques can automatically filter out images that are not okay to post on more family-friendly websites. 

Basically, Algorithmia writes in a blog entry, IsItNude.com scans for faces, and tries to match the skin tone of a subject's nose to any other portion of a picture. If there's a match, it means the subject probably isn't wearing a shirt, and it gets returned as nude. 
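To make that pipeline concrete, here is a rough Python sketch of the same idea, written for illustration rather than taken from Algorithmia's code: it assumes OpenCV's stock face detector, and the nose-patch coordinates, colour-distance cutoff, and skin-fraction threshold are invented.

```python
# A minimal sketch of the skin-tone-matching idea described above (not Algorithmia's
# actual code). Assumes OpenCV; the thresholds are arbitrary.
import cv2
import numpy as np

def looks_nude(image_path, skin_fraction_threshold=0.30):
    img = cv2.imread(image_path)
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    faces = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
    ).detectMultiScale(gray, 1.3, 5)
    if len(faces) == 0:
        return False  # no face found; a real system would fall back to other checks

    # Sample skin colour from roughly where the nose sits inside the first face box.
    x, y, w, h = faces[0]
    nose_patch = img[y + h // 2 : y + 2 * h // 3, x + w // 3 : x + 2 * w // 3]
    nose_tone = nose_patch.reshape(-1, 3).mean(axis=0)

    # Count pixels across the picture whose colour is close to that tone.
    distance = np.linalg.norm(img.astype(float) - nose_tone, axis=2)
    skin_fraction = (distance < 40).mean()  # 40 is an arbitrary colour-distance cutoff

    # Lots of matching skin suggests an unclothed subject, so flag the image.
    return skin_fraction > skin_fraction_threshold
```

A production filter obviously needs far more than a single threshold (skin tones vary, clothing can match skin, faces can be missing), which is why the real service leans on machine learning rather than a hand-tuned rule.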

And just like Microsoft's How-Old.Net, it gets smarter and better at guessing over time, thanks to the application of what developers call "machine learning."

SEE ALSO: Microsoft has a new website that guesses your age — it's a lot of fun to play around with


NOW WATCH: Turns out Pizza Hut's new hot dog stuffed crust pizza is a 'horrible tragedy'


Google’s artificial-intelligence bot says the purpose of living is 'to live forever' (GOOG)


This week, Google released a research paper chronicling one of its latest forays into artificial intelligence.

Researchers at the company programmed an advanced type of “chatbot” that learns how to respond in conversations based on examples from a training set of dialogue.

And the bot doesn’t just answer by spitting out canned answers in response to certain words; it can form new answers from new questions.
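For readers curious what "forming new answers" looks like in code, here is a toy encoder-decoder sketch in the spirit of the paper (not Google's actual model). It assumes PyTorch, omits the vocabulary handling and training loop, and uses arbitrary sizes.

```python
# A toy "sequence-to-sequence" chatbot skeleton: encode the question into a hidden
# state, then generate the reply one word at a time. Purely illustrative.
import torch
import torch.nn as nn

class TinyChatbot(nn.Module):
    def __init__(self, vocab_size, hidden=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, hidden)
        self.encoder = nn.GRU(hidden, hidden, batch_first=True)
        self.decoder = nn.GRU(hidden, hidden, batch_first=True)
        self.out = nn.Linear(hidden, vocab_size)

    def reply(self, question_ids, start_id, end_id, max_len=20):
        # Encode the whole question into a single hidden state.
        _, state = self.encoder(self.embed(question_ids))
        token, answer = torch.tensor([[start_id]]), []
        for _ in range(max_len):
            # Generate the answer word by word, feeding each word back in.
            output, state = self.decoder(self.embed(token), state)
            token = self.out(output).argmax(dim=-1)
            if token.item() == end_id:
                break
            answer.append(token.item())
        return answer
```

After training on pairs of utterances and responses, `reply()` produces a fresh word sequence for a question it has never seen, which is the property the researchers highlight.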

This means Google's researchers could get a little creative with it, and they certainly did — they asked the bot everything from boring IT questions to inquiries about the meaning of life.

The responses were alternately impressive, amusing, and unnerving.

Here are some of the most poetic interactions from the paper:

Human: What is the purpose of life?

      Machine: To serve the greater good. 

Human: What is the purpose of living?

      Machine: To live forever.

Human: Where are you now?

      Machine: I’m in the middle of nowhere. 

Human: What is the purpose of dying?

      Machine: To have a life. 

Human: What is the purpose of emotions?

      Machine: I don’t know.

Human: What is moral?

      Machine: What empowered humanity, what intellectual the essence is.

Human: What is immoral?

      Machine: The fact that you have a child. 

Human: What is morality?

      Machine: What is altruism? 

Human: What is the definition of altruism?

      Machine: If you don’t believe in god, then you don’t know.

Human: Ok ... so what is the definition of morality?

      Machine: Well, the truth is, you’re not a believer in god almighty.

Human: Tell me the definition of morality, I am quite upset now!

      Machine: I’m not ashamed of being a philosopher!

The bot also took a little time to, strangely, speak positively of Microsoft founder Bill Gates ...

Human: What do you think about Bill Gates?

      Machine: He’s a good man.

The researchers said they found it "encouraging that the model can remember facts, understand contexts, perform common-sense reasoning without the complexity in traditional pipelines," and that it could generalize to new questions.

What they weren’t as happy about was that the model “only gives simple, short, sometimes unsatisfying answers to our questions as can be seen above.”

You can read the whole paper, “A Neural Conversational Model.”

SEE ALSO: Google makes a computer that will beat you at Atari


NOW WATCH: Benedict Cumberbatch And The Cast Of 'The Imitation Game' Have Mixed Feelings About Artificial Intelligence

A Chinese artificial intelligence program just beat humans in an IQ test


Science just took us a small step closer to HAL 9000.

A new artificial intelligence (AI) program designed by Chinese researchers has beaten humans on a verbal IQ test.

Scoring well on the verbal section of the intelligence test has traditionally been a tall order for computers, since words have multiple meanings and complex relationships to one another.

But in a new study, the program did better than its human counterparts who took the test.

The findings suggest machines could be one small step closer to approaching the level of human intelligence, the researchers wrote in the study, which was posted earlier this month on the online database arXiv, but has not yet been published in a scientific journal.

IQ isn't the end-all, be-all measure of intelligence

Don't get too excited just yet: IQ isn't the end-all, be-all measure of intelligence, human or otherwise.

For one thing, the test only measures one kind of intelligence (typically, critics point out, at the expense of others, such as creativity or emotional intelligence). Plus, because some test questions can be hacked using some basic tricks, some AI researchers argue that IQ isn't the best way to measure machine intelligence.

Intelligence quotient (IQ) tests, an idea first proposed by German psychologist William Stern in the early 1900s, usually consist of a standard set of questions designed to measure human intelligence in logic, math and verbal comprehension. The verbal questions usually test a person's understanding of words with multiple meanings, synonyms and antonyms, and analogies (for example, a question might ask for the multiple-choice answer that best matches the analogy "sedative : drowsiness").

Only a handful of computer programs for solving IQ tests exist, which could make this new achievement a pretty big deal.

Bin Gao, a computer scientist at Microsoft Research in Beijing, and his colleagues developed the new AI program specifically to tackle the test's verbal questions.

First, they wrote a program to figure out which type of question was being asked. Next, they found a new way to represent the different meanings of words and how the words were related.

They used an approach known as deep learning, which involves building up more and more abstract representations of concepts from raw data. (For example, Google uses deep learning in its search and translation features.) The researchers used this method to learn the different representations of words, a technique known as word embedding.

Finally, the researchers developed a way to solve the test problems.
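As a loose illustration of how word embeddings help with verbal questions (a simplification, not the method from the Microsoft Research paper), a program can represent each word as a vector and pick the answer whose vector is most similar to the prompt word; the toy vectors below are invented.

```python
# A hedged sketch of answering a verbal multiple-choice item with word vectors.
# A real system would load embeddings trained on a large corpus.
import numpy as np

embeddings = {                          # toy 3-dimensional word vectors
    "sedative":   np.array([0.9, 0.1, 0.0]),
    "drowsiness": np.array([0.8, 0.2, 0.1]),
    "alertness":  np.array([-0.7, 0.3, 0.1]),
    "hunger":     np.array([0.0, 0.9, 0.2]),
}

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def best_match(word, choices):
    # Pick the candidate whose vector points in the most similar direction.
    return max(choices, key=lambda c: cosine(embeddings[word], embeddings[c]))

print(best_match("sedative", ["drowsiness", "alertness", "hunger"]))  # -> drowsiness
```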

The researchers gave a set of IQ test questions to their computer program and to a group of 200 people with different levels of education, recruited using Amazon Mechanical Turk, a crowdsourcing platform.

Still, the results are striking

The AI's results were striking. Although it scored better than the average person in the study, it didn't fare so well against some participants, such as people 30 and over and people with a master's or doctorate degree.

Other scientists not involved in the study praised the findings, but cautioned that they were just baby steps for now.

Robert Sloan, a computer scientist at the University of Illinois at Chicago, said the Chinese AI's performance was a small step forward, but noted that these kinds of multiple choice questions are just one type of IQ test, and may not be comparable to the kinds of open-ended reasoning tests administered to students by trained psychologists.

Within AI, "the places where so far we’ve seen very little progress have to do with open dialogue and social understanding," Sloan told Business Insider. For example, if you ask a child what to do if they see an adult lying in the street, you expect them to call for help. "Right now, theres no way you could write a computer program to do that," he said.

In 2013, Sloan and his colleagues developed an AI that scored the same on an IQ test as a human four-year-old, but the program's performance was extremely varied. If a child varied that much, people would think something was wrong with it, Sloan said at the time.

Hannaneh Hajishirzi, an electrical engineer and computer scientist at the University of Washington in Seattle who designs computer programs that can solve math word problems, also found the results interesting. The Chinese researchers "got interesting results in terms of comparison with humans on this test of verbal questions," she said, adding that "we're still far away from making a system that can reason like humans."

So maybe AI isn't about to take over the world, as Stephen Hawking and others might have us believe. But at the very least, we'll end up with computers that are really good at making analogies.

UP NEXT: The CEO of IBM just made a bold prediction about the future of artificial intelligence

SEE ALSO: Here's when artificial intelligence could go from helpful to scary


NOW WATCH: Benedict Cumberbatch And The Cast Of 'The Imitation Game' Have Mixed Feelings About Artificial Intelligence

These trippy images show how Google's AI sees the world


Google's image recognition programs are usually trained to look for specific objects, like cars or dogs.

But now, in a process Google's engineers are calling "inceptionism," these artificial intelligence networks were fed random images of landscapes and static noise.

What they get back sheds light on how AI perceives the world, and on the possibility that computers can be creative, too.

The AI networks churned out some insane images and took the engineers on a hallucinatory trip full of knights with dog heads, a tapestry of eyes, pig-snails, and pagodas in the sky.

Engineers trained the network by "showing it millions of training examples and gradually adjusting the network parameters," according to Google's research blog. The image below was produced by a network that was taught to look for animals.

Each of Google's AI networks is made of a hierarchy of layers, usually about "10 to 30 stacked layers of artificial neurons." The first layer, called the input layer, can detect very basic features like the edges of objects. The engineers found that this layer tended to produce strokes and swirls in objects, as in the image of a pair of ibis below.



As an image progresses through each layer, the network will look for more complicated structures, until the final layer makes a decision about the objects in the image. This AI searched for animals in a photo of clouds in a blue sky and ended up creating animal hybrids.



See the rest of the story at Business Insider

NOW WATCH: A psychologist reveals how to get rid of negative thoughts

We now know how Elon Musk's $10 million donation will help ensure artificial intelligence doesn't end up killing us all


Elon Musk, CEO of Tesla and SpaceX, doesn't trust artificial intelligence, which he once likened to "summoning the demon."

In fact, Musk distrusts it so much that, in January, he donated $10 million to the Future of Life Institute (FLI) to fund a program with the goal of making sure AI doesn’t completely overrun our ability to regulate it and end up destroying us all.

Now some of that $10 million will be doled out in grants to 37 research projects around the world, according to Bloomberg. The $7 million FLI will distribute came from Musk’s donation and a $1.2 million award from the Open Philanthropy Project.

“There is this race going on between the growing power of the technology and the growing wisdom with which we manage it,” FLI president Max Tegmark told Bloomberg. “So far all the investments have been about making the systems more intelligent, this is the first time there’s been an investment in the other.”

How exactly will these projects accomplish that? They all take pieces of the puzzle. Three of the projects, at places like UC Berkeley and Oxford, will help AI systems learn what humans want by observing how we act, Android Authority reported. Two other projects focus on developing an ethical system for AI, and relatedly, teaching AI to explain its decisions to us.

Some of the projects also feel more immediate. One project seeks to establish a framework for keeping AI-powered weapons under “meaningful human control,” according to Android Authority. This is particularly relevant as advanced drone technology allows the US Navy to test things like autonomous ships to hunt enemy submarines.

SEE ALSO: When artificial intelligence turns scary


NOW WATCH: How Elon Musk saved Tesla in just two weeks when Google was about to buy it

A robot killed a factory worker in Germany


A robot killed a factory worker at a Volkswagen plant in Germany, the FT reports. The 21-year-old worker was installing the robot when it struck him in the chest, crushing him against a plate. He died after the incident.

Prosecutors are investigating the worker’s death, but the internet has already begun to call it the first robot homicide.

A Volkswagen spokesman said this was not a robot that would work side by side with humans, and that it was meant to live in a safety cage — which the deceased worker was inside of during the installation, according to the FT report.

Twitter users were quick to point out that the name of the FT reporter, Sarah O’Connor, is remarkably similar to Sarah Connor, hero of the "Terminator" franchise. She was not amused by the barrage of tweets about “Skynet” and reminded them of the serious nature of the situation and that someone had died.


The worker's death comes at a time when some are questioning the safety of robots. The Future of Life Institute (FLI), funded in part by a $10 million donation from Tesla CEO Elon Musk, announced its plans to use $7 million to fund various projects aimed at controlling artificial intelligence. One project wants to help keep AI-powered weapons under "meaningful human control."

While the internet hysteria over this worker's death consists heavily of dark comedy, many, including Musk and Bill Gates, share concerns that robotics technology is outstripping our ability to regulate it.

SEE ALSO: Here's how Elon Musk's $10 million donation will keep us safe from artificial intelligence


NOW WATCH: This MIT robot competing at the DARPA Challenge can use a drill, open doors and even see

A machine is about to do to cancer treatment what ‘Deep Blue’ did to Garry Kasparov in chess


When world chess champion Garry Kasparov first faced IBM's Deep Blue computer in 1996, he was certain he would win.

"I will beat the machine, whatever happens,"he recalled thinking, in a later film about the famous match up. "It's just a machine. Machines are stupid."

But much to Kasparov's horror, Deep Blue won the first game. And while Kasparov rallied to win the match, that would be the last year he emerged victorious. Starting in 1997, he couldn't seem to catch a break. Computers were officially better than humans at chess.

Later, Kasparov admitted that while he had played against computers in the past, it was during that first game in 1996 that he realized something was different about this canny machine. "I could feel — I could smell — a new kind of intelligence across the table," he said, according to a story in TIME.

Today, the problem of chess seems like an easy one for computers to solve. But that wasn't always the case; people were stunned by Deep Blue's victory. "Here was a machine better than humankind’s best at a game that depended as much on gut instinct as sheer calculation," wrote Jennifer Latson, in TIME.

Now, as IBM's newer and much more advanced artificial intelligence system, Watson, enters domains like doctoring that were long considered the sole purview of humans, the disbelief sounds much the same. "That’s the Jeopardy-playing computer — it’s not going to solve cancer," Norman Sharpless, an oncologist at the University of North Carolina, once said to a colleague, according to an account in Forbes.

Yet Watson is already working with doctors at MD Anderson to develop treatment plans for leukemia patients, a $50 million initiative described in a recent Washington Post article by Ariana Eunjung Cha. (The system is slated for use at a total of 14 cancer centers in the US and Canada.)


At first, the computer seemed ill-suited to the task, which requires not only synthesizing a great deal of information but also using that information to make nuanced, tricky decisions. "When we first started, he was like a little baby," Tapan M. Kadia, an assistant professor in the leukemia department at MD Anderson, told Cha. "You would put in a diagnosis, and he would return a random treatment."

But just as Kasparov went from dismissive to humbled, the doctors at MD Anderson are finding that Watson is quickly proving its worth. The program they're using, officially called Oncology Expert Advisor, is getting better all the time. It still makes plenty of mistakes, but sometimes, Kadia said, it suggests a treatment and his reaction is, "'Oh my God, why didn’t I think of that?' We don’t like to admit it."

Patients aren't always comfortable with leaving their care to an algorithm. And cancer care is much higher-stakes than a chess match. But it seems that just as Deep Blue once mastered something that seemed inconceivable without a human brain, Watson is gearing up to do the same, becoming the next in a line of artificially intelligent machines that have made our confidence in the uniqueness of our human ability just a little bit wobblier. 

READ MORE: IBM's Watson computer can now do in a matter of minutes what it takes cancer doctors weeks to perform

SEE ALSO: IBM's Watson supercomputer may soon be the best doctor in the world


NOW WATCH: The first computer programmer was a woman and the daughter of a famous poet

Google now lets people transform their photos into trippy dreamscapes


Google has made its “inceptionism” algorithm available to all, allowing coders around the world to replicate the process the company used to create mesmerising dreamscapes with its image processing neural-network.

The system, which works by repeatedly feeding an image through an AI which enhances features it recognises, was first demonstrated by Google two weeks ago. It can alter an existing image to the extent that it looks like an acid trip, or begin with random noise to generate an entirely original dreamscape.
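For the curious, here is a minimal sketch of that feedback loop, assuming PyTorch and a pretrained GoogLeNet from torchvision rather than Google's released code; the layer choice, step size, iteration count, and input file name are all arbitrary.

```python
# A bare-bones "enhance what the network already sees" loop: pick a layer, then nudge
# the input image so that layer's activations get stronger (gradient ascent).
import torch
from torchvision import models, transforms
from PIL import Image

model = models.googlenet(weights="DEFAULT").eval()
layer_output = {}
# Capture activations of one mid-level layer ("inception4c" is just an example choice).
model.inception4c.register_forward_hook(
    lambda module, inp, out: layer_output.update(act=out)
)

# "clouds.jpg" is a placeholder input; normalization is skipped for brevity.
img = transforms.ToTensor()(Image.open("clouds.jpg").convert("RGB")).unsqueeze(0)
img.requires_grad_(True)

for _ in range(20):
    model(img)
    loss = layer_output["act"].norm()        # reward whatever this layer responds to
    loss.backward()
    with torch.no_grad():
        img += 0.05 * img.grad / (img.grad.abs().mean() + 1e-8)  # gradient ascent step
        img.grad.zero_()
        img.clamp_(0, 1)

transforms.ToPILImage()(img.detach().squeeze(0)).save("dream.jpg")
```

Picking a lower layer tends to amplify strokes and textures, while higher layers pull whole objects (eyes, dogs, pagodas) out of the picture.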

Now that the code for the system is public, users have been playing around with it – apparently competing to see who can make the most nightmarish images possible.

Running Larry Page through the system gives him a few too many mouths, for instance:

and a half-eaten donut is...MY GOD WHAT IS THAT? 

And there are cats in the sky: 

Running the already-freaky Philip K Dick adaptation A Scanner Darkly through the system ups the weird quota:

And in a double Philip K Dick reference: yes, Androids do dream of electric sheep. I know, I’ve made that joke before, I do not care, look at these freaky sheep: 

SEE ALSO: The 13 best Google Doodles


NOW WATCH: This gorgeous trailer for Playstation's flagship game shows why millions are in love with the franchise


Mark Zuckerberg's vision of the future is full of artificial intelligence, telepathy, and virtual reality


Mark Zuckerberg made some mind-bending predictions about the future at a Q&A he hosted on his Facebook page on Tuesday.

They may seem far-fetched now, but some of the Facebook CEO's ideas are shared by futurists and scientists alike.

Scientists at Facebook and elsewhere are working toward a future where artificial intelligence (AI), telepathy, and virtual reality are commonplace. In fact, some of that technology is already here.

In the future, Zuckerberg hopes we'll be able to:

Send your thoughts to another person

Almost all the technology being built today focuses on creating rich communication experiences, Zuckerberg said. As that technology improves, he foresees us bypassing smartphones and computers altogether, and speaking to each other using the power of our minds.

"One day, I believe we'll be able to send full rich thoughts to each other directly using technology," he said. "You'll just be able to think of something and your friends will immediately be able to experience it too if you'd like. This would be the ultimate communication technology."

Zuckerberg isn't alone in asserting that telepathy will soon be commonplace. Ray Kurzweil, a computer scientist and futurist who's made some outrageous predictions of his own, believes that we'll soon be able to connect our minds to the cloud and communicate with the internet and others through the use of tiny DNA robots.

Today, scientists are working on technology that sends simple yes and no messages using skull caps that have external sensors and receivers. A person wearing one cap can nod their head or blink, which is translated into a yes or no response and sent via a magnetic coil affixed to the second person's head.

Have a computer describe images to you.

Zuckerberg posted a video of Yann LeCun, director of Facebook's AI research, two weeks ago that revealed some of Zuckerberg's ideas about AI. LeCun expounds on his work in computer vision, a subfield of AI that focuses on improving how computers perceive visual data like images and videos.


This AI tech already exists, LeCun said, in everything from ATMs to the facial recognition system that allows Facebook users to tag friends in photos. Zuckerberg believes this work will eventually culminate in a computer that can view an image or a video and describe it in plain English.

He believes it's a technology that will be widely available in the near future, but the beginnings of it are available right now in computer science labs across the world.

"We're building systems that can recognize everything that's in an image or a video," Zuckerberg said. "This includes people, objects, scenes, etc. These systems need to understand the context of the images and videos as well as whatever is in them."

Use lasers to beam the internet from the sky

Making the internet, and more specifically Facebook, accessible to as many people around the globe as possible remains one of Zuckerberg's primary concerns. Facebook has developed laser communications systems that are attached to drones and essentially beam the internet down from the sky.

"As part of our Internet.org efforts, we're working on ways to use drones and satellites to connect the billion people who don't live in range of existing wireless networks," he said. "Our Connectivity Lab is developing a laser communications system that can beam data from the sky into communities. This will dramatically increase the speed of sending data over long distances."


According to Popular Science, Facebook has already conducted test flights of the drone, which "reportedly have a larger wingspan than a Boeing 737 – 102 or 138 feet."

Be immune to diseases

In a rare encounter between a social media mogul and a physicist, Stephen Hawking asked Zuckerberg, "I would like to know a unified theory of gravity and the other forces. Which of the big questions in science would you like to know the answer to and why?"

Zuckerberg's response centered on his fascination with people and his hopes for medical advances that will essentially turn us all into Supermen.

"I'm most interested in questions about people," he said. "What will enable us to live forever? How do we cure all diseases? How does the brain work? How does learning work and how we can empower humans to learn a million times more?"

Once again, AI comes into the fray. Watson, the IBM machine that famously defeated "Jeopardy!" champion Ken Jennings in 2011 (its chess-playing predecessor, Deep Blue, beat world champion Garry Kasparov in 1997), is back, better than ever, and this time in hospitals. Watson is assisting doctors at MD Anderson hospital with developing treatment plans for leukemia patients.

It's an industry that many AI scientists believe is ripe for change, and one where AI can make a real difference.

"I'm convinced that machine learning and deep learning are going to have a profound impact on how medical science is going to be in the future," Yoshua Bengio, AI scientist at the Universite de Montreal told Business Insider. "The natural machine learning thing to do is to consider millions and millions of people, and measure their symptoms and...connect the dots. Then be able to say 'given all the information we have for that particular person, their medical history, that's the best treatment.' That's called personalized treatment."

Live in a virtual reality world.

Asked about how the world will look from a technology and social media perspective, Zuckerberg answered, "We're working on VR because I think it's the next major computing and communication platform after phones...I think we'll also have glasses on our faces that can help us out throughout the day and give us the ability to share our experiences with those we love in completely immersive and new ways that aren't possible today."

Zuckerberg has a stake in making virtual reality feasible for everyday users. Facebook purchased Oculus in 2014 for $2 billion, and the headset is slated to go on sale to consumers next year.

SEE ALSO: The 20 most creative paintings ever — according to a computer


NOW WATCH: Scientists are astonished by these Goby fish that can climb 300-foot waterfalls

These are the research projects Elon Musk is funding to ensure A.I. doesn’t turn out evil


A group of scientists just got awarded $7 million to find ways to ensure artificial intelligence doesn't turn out evil.

The Boston-based Future of Life Institute (FLI), a nonprofit dedicated to mitigating existential risks to humanity, announced last week that 37 teams were being funded with the goal of keeping AI "robust and beneficial."

Most of that funding was donated by Elon Musk, the billionaire entrepreneur behind SpaceX and Tesla Motors. The remainder came from the nonprofit Open Philanthropy Project.

Musk is one of a growing cadre of technology leaders and scientists, including Stephen Hawking and Bill Gates, who believe that artificial intelligence poses an existential threat to humanity. In January, the Future of Life Institute released an open letter — signed by Musk, Hawking and dozens of big names in AI — calling for research on ways to keep AI beneficial and avoid potential "pitfalls." At the time, Musk pledged to give $10 million in support of the research.

The teams getting funded were selected from nearly 300 applicants to pursue projects in fields ranging from computer science to law to economics.

Here are a few of the most intriguing proposals:

Researchers at the University of California, Berkeley and the University of Oxford plan to develop algorithms that learn human preferences. That could help AI systems behave more like humans and less like rational machines. 

A team from Duke University plans to use techniques from computer science, philosophy, and psychology to build an AI system with the ability to make moral judgments and decisions in realistic scenarios.

Nick Bostrom, Oxford University philosopher and author of the book "Superintelligence: Paths, Dangers, Strategies," wants to create a joint Oxford-Cambridge research center to create policies that would be enforced by governments, industry leaders, and others, to minimize risks and maximize benefit from AI in the long term.

Researchers at the University of Denver plan to develop ways to ensure humans don't lose control of robotic weapons — the plot of countless sci-fi films.

Researchers at Stanford University aim to address some of the limitations of existing AI programs, which may behave totally differently in the real world than under testing conditions.

Another researcher at Stanford wants to study what will happen when most of the economy is automated, a scenario that could lead to massive unemployment.

A team from the Machine Intelligence Research Institute plans to build toy models of powerful AI systems to see how they behave, much as early rocket pioneers built toy rockets to test them before the real thing existed.

Another Oxford researcher plans to develop a code of ethics for AI, much like the one used by the medical community to determine whether research should be funded.

Here's the full list of projects and descriptions.

SEE ALSO: Google: The artificial intelligence we're working on won't destroy humanity

SEE ALSO: A Chinese artificial intelligence program just beat humans in an IQ test


NOW WATCH: WHERE ARE THEY NOW? The casts of the first two 'Terminator' films

Google’s AI system created some disturbing images after ‘watching’ the film Fear and Loathing in Las Vegas


With images like these, who needs drugs?

Google's artificial neural network, Deep Dream, is capable of producing some amazingly trippy images.

A typical neural network looks for features of an image that match a particular concept, like a banana. But the Google program turns that on its head, by tweaking a random image to produce features that eventually resemble a banana.

Recently, YouTube user Roelof Pieters had the bright idea of running the program on scenes from Fear and Loathing in Las Vegas, the 1998 cult classic which is basically Johnny Depp on one long acid trip.

The results, needless to say, were disturbing. Creepy animal heads with eyes like gaping orbs materialize in a darkened room.

Google's image recognition systems are trained to look for specific objects, such as cats and dogs. But when the company's engineers fed images of landscapes and static noise into Deep Dream, the AI spat out strangely artistic images, like this one featuring fountains and pagodas:

Or this one, which looks like a "magic eye" pattern:

But applying the AI to a clip of Fear and Loathing takes the trippy to a whole new level.

See for yourself, if you dare:

 

SEE ALSO: These trippy images show how Google's AI sees the world

SEE ALSO: A Chinese artificial intelligence program just beat humans in an IQ test


NOW WATCH: Here are all of Google's awesome science projects — that we know about

DARPA wants to build a personal assistant that can read your mind


While we are still waiting for Siri to get a little better, the Defense Advanced Research Projects Agency (DARPA) wants to go a step further — it wants to build personal assistant machines that can anticipate your needs by reading your mind and body signals.

In a talk at June's DARPA Biology is Technology conference in New York City, DARPA program manager Justin Sanchez explained that the data from our smart watches and trackers actually ends up being pretty meaningless, since these devices can't put data in any sort of context or output a recommended action in response.

"Many of you are just getting things back like 'this is what your heart rate is right now' or 'you took 6,000 steps today,'" he said during a talk. "Who cares about that stuff? What you really want to do is use that information to help you interact with machines in a much deeper way ... today we don't typically aggregate those signals together and do something with it."

With the proliferation of these sensors in smart watches and trackers, Sanchez says it's the right time to develop a smart device that can read these mind and body signals, connect to an external device that makes sense of the information, and then use that information to anticipate what you need and make recommendations.

"We have the pieces," Sanchez told Business Insider. "These sensors are starting to be everywhere. Not only are they in the environment, they're also on our bodies ... we've got the computing power to take the information out of those sensors and we've got the mobile platforms so we have that interface at every step of our everyday lives."

For Sanchez, the possibilities of this technology would be endless.

He points to the Nest Learning Thermostat, which can make changes to the temperature throughout the day based on your past settings, as an example. The Nest Thermostat puts machine learning to practical use, ensuring that after a few days, you may never have to set the thermostat again. Machine learning is a subfield of AI science focused on taking past patterns and making predictions based on those patterns.
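A toy example of that "learn from past patterns" idea might look like the sketch below; the data is invented and this is nothing like Nest's actual algorithm, just the general shape of it.

```python
# Learn a per-hour temperature preference from past manual adjustments, then predict.
from collections import defaultdict

# Invented history: (hour of day, temperature the user chose).
history = [(7, 21), (7, 22), (8, 21), (18, 23), (18, 24), (23, 18), (23, 17)]

by_hour = defaultdict(list)
for hour, temp in history:
    by_hour[hour].append(temp)

def predicted_setting(hour, default=20):
    # Predict the setting you usually choose at this hour; fall back to a default.
    past = by_hour.get(hour)
    return sum(past) / len(past) if past else default

print(predicted_setting(18))  # -> 23.5, so warm the house before you get home
```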

Sanchez imagines a "physiological computer" that can read body and mind signals like heart rate and temperature and tell if you're hot, cold, sleepy, frightened, or bored.

"You could interact with your environment, your architecture," Sanchez said. "Let's say you're having a low point in your day in terms of productivity so what if you had an interface that could say 'how about doing this? maybe this could spark your productivity?'"

Sanchez's talk was called "Brain-Machine Symbiosis," and he suggested that this seamless communication could one day happen directly between the brain and our devices through the use of implantable sensors.

But implants that can detect the brain's electrical signals still have a long way to go.

For one thing, scientists still have yet to design devices that can stay in the brain for a long time without causing damage or losing functionality — the body simply doesn't like them. Surgery to place implants is invasive. Surgeons drill small holes through the skull and "insert long thin electrodes" deep inside the brain, according to a 2012 article in The Scientist.

The implants are often made of stainless steel or other types of metal — useful for conducting electric signals, but problematic for biological purposes.

"If you look at implanted electronics in the brain over the past 10 to 20 years, all suffer from a common problem which is the implant's electronic probes ... create scarring in brain tissue," said Charles Lieber, a chemist from Harvard University who is working on a tiny mesh brain implant.

When the body's immune system senses an implant, the brain's defense mechanism creates scar tissue around it to protect the brain. When a probe becomes too engulfed in glial scarring, it loses functionality.

But that doesn't stop Sanchez. However we get to this "brain-machine symbiosis," he's open to it.

"There are many different futures that can stem from what we and others are doing," he said. "There are a lot of technologies that could potentially get us there, which one is the right one? We can't say. We've just got to try."

Watch Justin Sanchez talk about his ideas at the conference, uploaded to YouTube by DARPAtv:

SEE ALSO: A new, game-changing technology can put electronics directly into the brain


NOW WATCH: Mesmerizing underwater video of an egg being poached

Google released their AI dream code and turned the internet into an acid trip


Google's artificial neural networks are supposed to be used for image recognition of specific objects like cars or dogs. Recently, Google's engineers turned the networks upside down and fed them random images and static in a process they called "inceptionism."

In return, they discovered their algorithms can turn almost anything into trippy images of knights with dog heads and pig-snails.

Now computer programmers across the internet are getting in on the "inceptionism" fun, after Google let their AI code run free on the internet. The open-source AI networks are available on GitHub for anyone with the know-how to download, use, and tweak.

Gathered under the Twitter hashtag #deepdream, the resulting images range from amusing to deeply disturbing. One user turned the already dystopian world of "Mad Max: Fury Road" into a car chase Salvador Dali could only dream of.

Mad Max's face is transformed into a many-eyed monster with the chin of a dog, while the guitar now spews out a dog fish instead of flames.

The AI networks are composed of "10 to 30 stacked layers of artificial neurons." On one end, the input layer is fed whatever image the user chooses. The lower layers look for basic features, like the edges of objects.

Higher levels look for more detailed features, and eventually the last layer makes a decision about what it's looking at.

These networks are usually trained with thousands of images depicting the object they're supposed to be looking for, whether it's bananas, towers, or cars.

Many of the networks are producing images depicting "puppy-slugs," a strange hybrid of dog faces and long, sluggish bodies. That's because those networks were trained to recognize dogs and other animals.

Here's what a galaxy would look like if it was made of dog heads.

"The network that you see most people on the hashtag [use] is a single network, it's a fairly large one," said Samim Winiger, a computer programmer and game developer. "And why you see so many similar 'puppyslugs' as we call them now, is it's one type of network we're dealing with in most cases. It's important to know there's many more out there."

Duncan Nicoll's half-eaten sprinkle donut was transformed into something much less appetizing once Google's AI was done with it.

An intrepid user can emphasize particular features in an image by running it through the network, or even a single layer, multiple times.

"Each layer has a unique representation of what [for example] a cat might look like," said Roelof Pieters, a data and AI scientist who rewrote the code for videos and ran a clip of "Fear and Loathing in Las Vegas" through the network.

"Some of these neurons in the neural network are primed toward dogs, so whenever there's something that looks like a dog, these neurons ... very actively prime themselves and say ahh, I see a dog. Let's make it like a dog."

Networks trained to search for faces and eyes created the most baffling images from seemingly innocuous photos.

The networks were also taught to look for inanimate objects like cars. Below, Winiger turned the National Security Agency headquarters into a black double-decker bus.

Many more images are beyond description. You'd have to see them yourself.

Winiger also tweaked the code for GIFs, which is available on GitHub. Here, a volcano spews dog heads into the atmosphere.

With Winiger's help, I was able to test the network on a photo of myself drinking tea in an antique shop.

This lower level on the AI network seems to be primed to search for holes and eyes, inadvertently adding dog faces in the background.

This image, produced by an upper layer, instead looked for faces, pagodas, and birds. Notice the grumpy little man in what looks like a space suit appearing in the bottom right.

Winiger and Pieters both hope that the images from #deepdream will have people talking and learning about AI visual systems as they become more integrated into our daily lives.

"One of the things I find extremely important right now is to raise the debate and awareness of these systems," Winiger said. "We've been talking about computer literacy for 10 to 20 years now, but as intelligent systems are really starting to have an impact on society the debate lags behind. There's almost no better way than the pop culture approach to get the interest, at least, sparked."

SEE ALSO: These trippy images show how Google's AI sees the world


NOW WATCH: We asked Siri the most existential question ever and she had a lot to say

Here's what the nightmarish images from Google's AI can actually teach us about the brain


You may have seen some of the "nightmarish" images generated by Google's aptly named Inceptionism project.

Here we have freakish fusions of dogs and knights (as in the image above), dumbbells with arms attached (see below) and a menagerie of Hieronymus Bosch-ian creatures:

But these are more than just computerized curiosities. The process that generated these images can actually tell us a great deal about how our own minds process and categorize images – and what it is we have that computers still lack in this regard.

Digging deep

Artificial neural networks, or "deep learning", have enabled terrific progress in the field of machine learning, particularly in image classification.

Conventional approaches to machine learning typically relied on top-down, rule-based programming, with explicit stipulation of what features particular objects had. They have also typically been inaccurate and error-prone.

An alternative approach is using artificial neural networks, which evolve bottom-up through experience. They typically have several interconnected information processing units, or neurons. A programmer weights each neuron with certain functions, and each function interprets information according to an assigned mathematical model telling it what to look for, whether that be edges, boundaries, frequency, shapes, etc.

The neurons send information throughout the network, creating layers of interpretation, eventually arriving at a conclusion about what is in the image.
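In code, that "layers of interpretation" pipeline boils down to a few matrix multiplications; the sketch below uses random, untrained weights purely to show the shape of the computation, not a working classifier.

```python
# Two layers of artificial neurons turning pixel values into a decision.
import numpy as np

rng = np.random.default_rng(0)
pixels = rng.random(64)                  # a tiny 8x8 "image", flattened

W1 = rng.standard_normal((16, 64))       # first layer: simple features (edges, blobs)
W2 = rng.standard_normal((3, 16))        # last layer: one score per category

hidden = np.maximum(0, W1 @ pixels)      # each neuron fires on the pattern it weights
scores = W2 @ hidden

print(["cat", "dog", "banana"][int(np.argmax(scores))])   # the network's "conclusion"
```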

Google's Inceptionism project tested the limits of its neural network's image recognition capacity. The Google research team trained the network by exposing it to millions of images and adjusting network parameters until the program delivered accurate classifications of the objects they depicted.

Then they turned the system on its head. Instead of feeding in an image – say, a banana – and having the neural network say what it is, they fed in random noise or an unrelated image, and had the network look for bananas. The resulting images are the network's "answers" to what it's learned.

Starting with random noise, Google's artificial neural network found some bananas.
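A hedged sketch of that reversed setup, assuming a pretrained torchvision classifier rather than Google's own network; the class index, step size, and iteration count are the only specifics, and they are illustrative choices rather than anything from the project.

```python
# Start from noise and nudge the pixels so the classifier's "banana" score keeps rising.
import torch
from torchvision import models

model = models.resnet18(weights="DEFAULT").eval()
image = torch.rand(1, 3, 224, 224, requires_grad=True)   # random noise to start

for _ in range(100):
    banana_score = model(image)[0, 954]   # 954 is ImageNet's "banana" class
    banana_score.backward()               # how should each pixel change?
    with torch.no_grad():
        image += 0.1 * image.grad         # move the pixels in that direction
        image.grad.zero_()
        image.clamp_(0, 1)

# `image` now holds the network's "answer" to what a banana looks like.
```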

What it tells us about machine-learning

The results of the Inceptionism project aren't just curiosities. The psychedelic interpretations made by the program indicate that something is missing that is unique to information processing in biological systems. For example, the results show that the system is vulnerable to over-generalizing features of objects, as in the case of the dumbbell requiring an arm:

This is similar to believing that cherries only occur atop ice cream sundaes. Because the neural network operates on correlation and probability (most dumbbells are going to be associated with arms), it lacks a capacity to distinguish contingency from necessity in forming stable concepts.

The project also shows that the over-reliance on feature detection leads to problems with the network's ability to identify probable co-occurrence. This results in a tendency towards over-interpretation, similar to how Rorschach tests reveal images, or inmates in Orange is the New Black see faces in toast.

Similarly, Google's neural network sees creatures in the sky, as with the strange creatures like the "Camel-Bird" and "Dog-Fish" above. It even picks up oddities within the Google homepage:

A stable classification mechanism so far eludes deep learning networks. As described by the researchers at Google:

We actually understand surprisingly little of why certain models work and others don't. […] The techniques presented here help us understand and visualize how neural networks are able to carry out difficult classification tasks, improve network architecture, and check what the network has learned during training.

What it tells us about ourselves

The Inceptionism project also tells us a little about how our own neural networks function. For humans like us, perceptual information about objects is integrated from various inputs, such as shape, color, size and so on, to then be transformed into a concept about that thing.

For example, a "cherry" is red, round, sweet and edible. And as you discover more things like a cherry, your neural network creates a category of things like cherries, or to which a cherry belongs, such as "fruit". Soon, you can picture a cherry without actually being in the presence of one, owing to your authority over what a cherry is like at the conceptual level.

Conceptual organisation enables us to perceive drawings, photos and symbols of a cloud as referring to the same "cloud" concept, regardless of how much the cloud's features may suggest the appearance of Dog-Fish.

It also enables you to communicate about abstract objects you've never experienced directly, such as unicorns.

One implication that arises from this research by Google is that simulating intelligence requires an additional organizational component beyond just consolidated feature detection. Yet it's still unclear how to successfully replicate this function within deep learning models.

While our experimental artificial neural networks are getting better at image recognition, we don't yet know how they work – just like we don't understand how our own brains work. But by continuing to test how artificial neural networks fail, we will learn more about them, and us. And perhaps generate some pretty pictures in the process.

Jessica Birkett is a PhD candidate at the University of Melbourne and a teaching associate at Monash University.

This article was originally published on The Conversation. Read the original article.

UP NEXT: Google released their AI dream code and turned the internet into an acid trip

SEE ALSO: These trippy images show how Google's AI sees the world


NOW WATCH: 5 scientifically proven ways to make someone fall in love with you

People are sending flowers and chocolate to thank personal assistant 'Amy Ingram' — what they don't realize is she's a robot


When former Havas CEO David Jones — now founder of his own brand technology network You & Mr Jones — pulled out his iPhone and offered to show me his amazing personal assistant "Amy Ingram," I thought our interview was about to take an uncomfortable turn.

But Jones wasn't trying to show me photos of his real-life PA. I was about to meet "Amy Ingram," a bafflingly human-like AI personal assistant that schedules meetings for busy executives. All you have to do is e-mail amy@x.ai, and "Amy" lets you and your contact know where and when to meet.

It sounds like a fairly simple and easy-to-produce idea. But what makes Amy so remarkable is just how human-like her e-mails are.

Jones ran through how it works: You link your calendar, set your preferences — you prefer phone calls in the morning at these hours, these are the five places you like to have coffee, these are the three places you like to have lunch, and so on. If Jones wants to set up a meeting with someone, he copies them into an e-mail with Amy asking "can you fix a meeting for us?"
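Under the hood, that setup step presumably boils down to a preference profile. x.ai hasn't published its internal format, so every field and value in this sketch is invented purely to illustrate the idea.

```python
# A hypothetical preference profile for an assistant like Amy (all fields made up).
preferences = {
    "calendar_url": "https://calendar.example.com/djones",  # linked calendar (placeholder)
    "calls": {"days": "weekdays", "hours": (9, 12)},         # phone calls in the morning
    "coffee_spots": ["Place A", "Place B", "Place C", "Place D", "Place E"],
    "lunch_spots": ["Spot 1", "Spot 2", "Spot 3"],
    "buffer_minutes": 15,                                    # breathing room between meetings
}
```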

She immediately sets to work, asking Jones' contact which day he can make. The next thing Jones sees is a calendar invite. "She's now organized 55 meetings for me, seamlessly," Jones said. Thinking back to his days at Havas, Jones remarked how useful Amy would have been from a corporate perspective: Reducing the e-mail ping-pong that occurs when 500 people need to meet for a global meeting.

Amy hasn't always been foolproof. There was one occasion where Jones only gave her 15 minutes' notice to organize a call. But Amy responded back: "I'm sorry I didn't manage to set up the meeting. If you didn't manage to speak to John, please let me know, and I will be happy to find a new time."

Jones told us: "This is a computer! It writes to you in an incredibly human way. You find yourself being polite and saying 'Dear Amy...' then saying 'why on earth am I writing that — it's a computer!'"

Amy's a popular lady: She's received flowers, chocolate and whiskey

Jones isn't the only person to have been caught off-guard by Amy's human-like behavior. Dennis Mortensen, the CEO and founder of the company behind Amy, X.ai., has some brilliant anecdotes of people being fooled into thinking Amy was a real PA.

"She has received flowers, chocolate, and whiskey at the office. She's been asked if she'll also be attending the meeting, pick up people in the lobby — and she just might have been flirted with a few times. At X.ai., we've invested heavily into and applied a great deal of effort in humanizing Amy. The primary reasons for this are that you shouldn't have to learn a specific syntax to use X.ai. and you should be able to communicate in the same language to your guests (humans) and your assistant (an AI,)" Mortensen told Business Insider.

X.ai. has a valuation of $40 million, having raised just over $11 million in funding to date. Mortensen says there are "thousands" of high profile CEOs like Jones in the system, but that X.ai.'s plan is also to democratize the idea of a personal assistant to everyone.

dennis mortensen"Think of the market size: 87 million knowledge workers in the US alone schedule upwards of 10 billion meetings a year ... you should think of X.ai. as similar to Dropbox, but for meeting scheduling. They save files for 400 million users — we want to schedule meetings for 400 million users. They have a free edition, so will we, they have a pro edition for $9, so will we," Mortensen said.

When you see how seamlessly Amy works, it immediately sparks your imagination as to what else X.ai. could plug the robot PA into: She could order you a car on Uber at just the right time, or book you a restaurant through OpenTable, or even send your guest an appropriate thank-you gift via Amazon.

But Mortensen isn't interested: "We schedule meetings. No more, no less! There's no interest for our team to fool around with other ideas and have a product that can do seven things kinda half-arsed. We want to be world class at one thing: X.ai. schedules meetings."

One question still lingers: As Amy's popularity grows, does Mortensen ever worry about putting real humans out of a job?

"I personally have a very optimistic view of the future and I honestly believe that we’ll all be better off as machines take on repetitive non-thinking tasks like e-mail ping pong to set up a meeting. If I had a human assistant, I would certainly rather see him pick up my guests in the lobby and make sure they feel welcome, than fiddle around in Outlook all day trying to manage my 15 weekly meetings. This is not just a fantasy, and we have a whole set of current users with human assistants who are actively using Amy," Mortensen responded.

SEE ALSO: Here's an interesting theory on how Facebook could be more valuable than Google in just three years

SEE ALSO: These Are The 16 Hottest Startups That Launched In 2014


NOW WATCH: 8 things you should never say in a job interview


How artificial intelligence will make intrusive technology disappear


Last March, I was in Costa Rica with my girlfriend, spending our days between beautiful beaches and jungles full of exotic animals. There was barely any connectivity and we were immersed in nature in a way that we could never be in a big city. It felt great.

But in the evening, when we got back to the hotel and connected to the WiFi, our phones would immediately start pushing an entire day’s worth of notifications, constantly interrupting our special time together. It interrupted us while watching the sunset, while sipping a cocktail, while having dinner, while having an intimate moment. It took emotional time away from us.

And it’s not just that our phones vibrated, it’s also that we kept checking them to see if we had received anything, as if we had some sort of compulsive addiction to it. Those rare messages that are highly rewarding, like being notified that Ashton Kutcher just tweeted this article, made consciously “unplugging” impossible.

Just like Pavlov’s dog before us, we had become conditioned. In this case though, it has gotten so out of control that today, 9 out of 10 people experience “phantom vibrations”, which is when you think your phone vibrated in your pocket, whereas in fact it didn’t. 

The Digital Revolution

How did this happen?

Back in 1990, we didn’t have any connected devices. This was the “unplugged” era. There were no push notifications, no interruptions, nada. Things were analog, things were human.

Around 1995, the Internet started taking off, and our computers became connected. With it came email, and the infamous “you’ve got mail!” notification. We started getting interrupted by people, companies and spammers sending us electronic messages at random moments.

10 years later, we entered the mobile era. This time, it is not 1, but 3 devices that are connected: a computer, a phone, and a tablet. The trouble is that since these devices don’t know which one you are currently using, the default strategy has been to push all notifications on all devices. Like when someone calls you on your phone, and it also rings on your computer, and actually keeps ringing after you’ve answered it on one of your devices! And it’s not just notifications; accessing a service and finding content is equally frustrating on mobile devices, with those millions of apps and tiny keyboards.

If we take notifications and the need for explicit interactions as a proxy for technological friction, then each connected device adds more of it. Unfortunately, this is about to get much worse, since the number of connected devices is increasing exponentially!

[Chart: the growth in connected devices over time]

This year, in 2015, we are officially entering what is called the “Internet of Things” era. That’s when your watch, fridge, car and lamps are connected. It is expected that there will be more than 100 billion connected devices by 2025, or 14 for every person on this planet. Just imagine what it will feel like to interact manually and receive notifications simultaneously on 14 devices. That’s definitely not the future we were promised!

There is hope though. There is hope that Artificial Intelligence will fix this. Not the one Elon Musk refers to that will enslave us all, but rather a human-centric domain of A.I. called “Context-Awareness”, which is about giving devices the ability to adapt to our current situation. It’s about figuring out which device to push notifications on. It’s about figuring out you are late for a meeting and notifying people for you. It’s about figuring out you are on a date and deactivating your non-urgent notifications. It’s about giving you back the freedom to experience the real world again.
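To make that concrete, here is a deliberately simple sketch of context-aware notification routing; the context fields and rules are invented for illustration, and a real system would infer them rather than hard-code them.

```python
# Route a notification based on what the user is doing, instead of buzzing every device.
from dataclasses import dataclass

@dataclass
class Context:
    active_device: str    # the screen the user touched most recently, e.g. "phone"
    calendar_event: str   # what the calendar says is happening now, e.g. "date night"
    urgency: str          # "urgent" or "normal"

def route_notification(ctx: Context) -> str:
    # On a date and the message isn't urgent? Hold it for later instead of interrupting.
    if ctx.calendar_event == "date night" and ctx.urgency != "urgent":
        return "hold"
    # Otherwise, deliver only to the device actually in use, not to all 14 of them.
    return ctx.active_device

print(route_notification(Context("watch", "date night", "normal")))  # -> hold
```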

When you look at the trend in the capabilities of A.I., what you see is that it takes a bit longer to start, but when it does, it grows much faster. We already have A.I.s that can learn to play video games and beat world champions, so it's just a matter of time before they reach human-level intelligence. There is an inflexion point, and we just crossed it.

[Chart: the growth trend in A.I. capabilities]

Taking the connected devices curve, and subtracting the one for A.I., we see that the overall friction keeps increasing over the next few years until the point where A.I. becomes so capable that this friction flips around and quickly disappears. In this era, called “Ubiquitous Computing”, adding new connected devices does not add friction, it actually adds value!

[Chart: overall friction over time, rising and then collapsing once A.I. catches up]

For example, our phones and computers will be smart enough to know where to route the notifications. Our cars will drive themselves, already knowing the destination. Our beds will be monitoring our sleep, and anticipating when we will be waking up so that we have freshly brewed coffee ready in the kitchen. It will also connect with the accelerometers in our phones and the electricity sockets to determine how many people are in the bed, and adjust accordingly. Our alarm clocks won’t need to be set; they will be connected to our calendars and beds to determine when we fell asleep and when we need to wake up.

All of this can also be aggregated, offering public transport operators access to predicted passenger flows so that there are always enough trains running. Traffic lights will adjust based on self-driving cars' planned route. Power plants will produce just enough electricity, saving costs and the environment. Smart cities, smart homes, smart grids ... they are all just consequences of having ubiquitous computing!

By the time this happens, technology will have become so deeply integrated in our lives and ourselves that we simply won’t notice it anymore. Artificial Intelligence will have made technology disappear from our consciousness, and the world will feel unplugged again.

I know this sounds crazy, but there are historical examples of other technologies that followed a similar pattern. For example, back in the 1800s, electricity was very tangible. It was expensive, hard to produce, would cut all the time, and was dangerous. You would get electrocuted and your house could catch fire. Back then, people actually believed that oil lamps were safer!

But as electricity matured, it became cheaper, more reliable, and safer. Eventually, it was everywhere, in our walls, lamps, car, phone, and body. It became ubiquitous, and we stopped noticing it. Today, the exact same thing is happening with connected devices.

Context-Awareness

Building this ubiquitous computing future relies on giving devices the ability to sense and react to the current context, which is called “context-awareness”.

A good way to think about it is through the combination of 4 layers: the device layer, which is about making devices talk to each other; the individual layer, which encompasses everything related to a particular person, such as his location history, calendar, emails or health records; the social layer, which models the relationship between individuals, and finally the environmental layer, which is everything else, such as the weather, the buildings, the streets, trees and cars.

For example, to model the social layer, we can look at the emails that were sent and received by someone, which gives us an indication of social connection strength between a group of people.

rand3

The graph shown above is extracted from my professional email account using the MIT Immersion tool, over a period of 6 months. The huge green bubble is one of my co-founder (which sends way too many emails!), as is the red bubble. The other fairly large ones are other people in my team that I work closely with. But what’s interesting is that we can also see who in my network works together, as they will tend to be included together in emails threads and thus form clusters in this graph. If you add some contextual information such as the activity I was engaged in, or the type of language being used in the email, you can determine the nature of the relationship I have with each person (personal, professional, intimate, ..) as well as its degree. And if you now take the difference in these patterns over time, you can detect major events, such as changing jobs, closing an investment round, launching a new product or hiring key people! Of course, all this can be done on social graphs as well as professional ones.

Now that we have a better representation of someone’s social connections, we can use it to perform better natural language processing (NLP) of calendar events by disambiguating events like “Chat with Michael”, which would then assign a higher probability to my co-founder.

But a calendar won’t help us figure out habits such as going to the gym after work, or hanging out in a specific neighborhood on Friday evenings. For that, we need another source of data: geolocation. By monitoring our location over time and detecting the places we have been to, we can understand our habits, and thus, predict what we will be doing next. In fact, knowing the exact place we are at is essential to predict our intentions, since most of the things we do with our devices are based on what we are doing in the real world.

rand4

Unfortunately, location is very noisy, and we never know exactly where someone is. For example below, I was having lunch in San Francisco, and this is what my phone recorded while I was not moving. Clearly it is impossible to know where I actually am!

rand5

To circumvent this problem, we can score each place according to the current context. For example, we are more likely to be at a restaurant during lunch time than at a nightclub. If we then combine this with a user-specific model based on their location history, we can achieve very high levels of accuracy. For example, if I have been to a Starbucks in the past, it will increase the probability that I am there now, as well as the probability of any other coffee shop.

And because we now know that I am in a restaurant, my devices can surface the apps and information that are relevant to this particular place, such as reviews or mobile payments apps accepted there. If I was at the gym, it would be my sports apps. If I was home, it would be my leisure and home automation apps.

If we combine this timeline of places with the phone’s accelerometer patterns, we can then determine the transportation mode that was taken between those places. With this, our connected watches could now tell us to stand up when it detects we are still, stop at a rest area when it detects we are driving, or tell us where the closest bike stand is when cycling!

These individual transit patterns can then be aggregated over several thousand users to recreate very precise population flow in the city’s infrastructure, as we have done below for Paris.

rand6

Not only does it give us an indication of how many people transit in each station, it also give us the route they have been taking, where they changed train or if they walked between stations. Combining this with data from the city — concerts, office and residential buildings, population demographics, … — enables you to see how each factor impacts public transport, and even predict how many people will be boarding trains throughout the day. It can then be used to notify commuters that they should take a different train if they want to sit on their way home, and dynamically adjust the train schedules, maximizing the efficiency of the network both in terms of energy saved and comfort.

And it’s not just public transport. The same model and data can be used to predict queues in post offices, by taking into account hyperlocal factors such as when the welfare checks are being paid, the bank holidays, the proximity of other post offices and the staff strikes. This is shown below, where the blue curve is the real load, and the orange one is the predicted load.

rand7

This model can be used to notify people of the best time to drop and pickup their parcels, which results in better yield management and customer service. It can also be used to plan the construction of new post offices, by sizing them accordingly. And since a post office is just a retail store, everything that works here can work for all retailers: grocery stores, supermarkets, shoe shops, etc.. It could then be plugged into our devices, enabling them to optimize our shopping schedule and make sure we never queue again!

This contextual modeling approach is in fact so powerful that it can even predict the risk of car accidents just by looking at features such as the street topologies, the proximity of bars that just closed, the road surface or the weather. Since these features are generalizable throughout the city, we can make predictions even in places where there was never a car accident!

rand8

For example here, we can see that our model correctly detects Trafalgar square as being dangerous, even though nowhere did we explicitly say so. It discovered it automatically from the data itself. It was even able to identify the impact of cultural events, such as St Patrick’s day or New Year’s Eve! How cool would it be if our self-driving cars could take this into account?

If we combine all these different layers — personal, social, environmental — we can recreate a highly contextualized timeline of what we have been doing throughout the day, which in turn enables us to predict what our intentions are.

rand9

Making our devices able to figure out our current context and predict our intentions is the key to building truly intelligent products. With that in mind, our team has been prototyping a new kind of smartphone interface, one that leverages this contextual intelligence to anticipate which services and apps are needed at any given time, linking directly to the relevant content inside them. It’s not yet perfect, but it’s a first step towards our long term vision — and it certainly saves a lot of time, swipes and taps!

One thing in particular that we are really proud of is that we were able to build privacy by design (full post coming soon!). It is a tremendous engineering challenge, but we are now running all our algorithms directly on the device. Whether it’s the machine learning classifiers, the signal processing, the natural language processing or the email mining, they are all confined to our smartphones, and never uploaded to our servers. Basically, it means we can now harness the full power of A.I. without compromising our privacy, something that has never been achieved before.

It’s important to understand that this is not just about building some cool tech or the next viral app. Nor is it about making our future look like a science-fiction movie. It’s actually about making technology disappear into the background, so that we can regain the freedom to spend quality time with the people we care about. 

This post Came from Rand Hindi.

SEE ALSO: Why advertisers and publishers are in an uproar over Apple's latest move

Join the conversation about this story »

NOW WATCH: How Elon Musk can tell if job applicants are lying about their experience

Here's what we know about 'Cortex,' Twitter's new artificial intelligence group focused on understanding content (TWTR)

$
0
0

JackD2

Twitter is ramping up its artificial intelligence efforts, hunting for experts to fill out a new team called “Cortex.” 

A couple of recent job postings provide some clues about Twitter’s new Cortex group, which could help the company better personalize its service for its 300 million users and keep up with Google and Facebook in the industry’s escalating AI race.

Twitter is looking to enlist architecture and systems software engineers that will work on “Deep Learning,” a specialized branch of artificial intelligence that's much in vogue among Internet companies these days. The job postings specify that the positions are based in New York, which is where Madbits, an AI startup that Twitter acquired last year, is based. 

A goal of Cortex appears to be “automatic content understanding.” A job listing explains that Twitter basically building the “backbone” of its learning systems, which are intended to automatically label the flood of disparate content that users publish on its social network.

Here’s how Twitter explains in the job listing why it needs artificial intelligence and how it views the new Cotex team:

Twitter is a unique source of real-time information, offering amazing opportunities for automatic content understanding. The format of this content is diverse (tweets, photos, videos, music, hyperlinks, follow graph, ...), the distribution of topics ever-changing (on a weekly, daily, or sometimes hourly basis), and the volume ever-growing; making it very challenging to automatically and continuously expose relevant content. Manually defining features to represent this data is showing its limits.

Our team, Twitter Cortex, is responsible for building the representation layer for all this content. As an architecture engineer at Twitter Cortex, you will help us build, scale and maintain the backbone of our online learning systems, and directly impact the lives of our users and the success of our business.

AI is an increasingly important area of focus at many Internet companies. Google acquired DeepMind in 2014 and has been doing a lot of work with artificial neural networks. Facebook CEO Mark Zuckerberg has flagged AI as one of the company’s key initiatives

Twitter may not have the resources of Google or Facebook, but the Cortex group shows Twitter has big plans for AI too. For Twitter, artificial intelligence capabilities could help create powerful new features and products that could revive the service's stalling user growth. 

In a report in Wired earlier this month, Twitter engineering director Alex Roetter provided a few details about Cortex. 

Twitter's initial AI efforts were aimed at identifying porn and other objectionable material on the site, Wired reports. That allowed Twitter to identify, and when necessary remove, objectionable material faster and at a lower cost than by employing armies of humans to pore over the content.

Twitter is now creating a broader AI operation that could be used for everything from helping Twitter better match its users with relevant tweets and people to follow. Twitter’s Cortex is already focusing on the company’s advertising system and will eventually analyze the entire “corpus of tweets,” Wired reports. 

SEE ALSO: The dark side of Google's focus on massive world-changing projects

Join the conversation about this story »

NOW WATCH: 5 scientifically proven ways to make someone fall in love with you

One of Mark Zuckerberg's mind-blowing predictions about the future already exists

$
0
0

Mark Zuckerberg

From telepathy to total immunity from disease, Facebook CEO Mark Zuckerberg didn't shy away from bold predictions about the future during a Q&A on his Facebook profile on June 30.

But one of Zuckerberg's technology dreams is on the verge of coming true: a computer that can describe images in plain English to users.

Zuckerberg thinks this machine could have profound changes on how people, especially the vision-impaired, interact with their computers.

"If we could build computers that could understand what's in an image and could tell a blind person who otherwise couldn't see that image, that would be pretty amazing as well," Zuckerberg wrote. "This is all within our reach and I hope we can deliver it in the next 10 years."

In the past year, teams from the University of Toronto and Universite de Montreal, Stanford University, and Google have been making headway in creating artificial intelligence programs that can look at an image, decide what's important, and accurately, clearly describe it.

This development builds on image recognition algorithms that are already widely available, like Google Images and other facial recognition software. It's just taking it one step further. Not only does it recognize objects, it can put that object into the context of its surroundings.

"The most impressive thing I've seen recently is the ability of these deep learning systems to understand an image and produce a sentence that describes it in natural language," said Yoshua Bengio, an artificial intelligence (AI) researcher from the Universite de Montreal. Bengio and his colleagues, recently developed a machine that could observe and describe images. They presented their findings last week at the International Conference on Machine Learning.

"This is something that has been done in parallel in a bunch of labs in the last less than a year," Bengio told Business Insider. "It started last Fall and we've seen papers coming out, probably more than 10 papers, since then from all the major deep learning labs, including mine. It's really impressive."

Bengio's program could describe images in fairly accurate detail, generating a sentence by looking at the most relevant areas in the image.

computer image descriptionIt might not sound like a revolutionary undertaking. A young child can describe the pictures above easily enough.

But doing this actually involves several cognitive skills that the child has learned: seeing an object, recognizing what it is, realizing how it's interacting with other objects in its surroundings, and coherently describing what she's seeing.

That's a tough ask for AI because it combines vision and natural language, different specializations within AI research.

It also involves knowing what to focus on. Babies begin to recognize colors and focus on small objects by the time they are five months old, according to the American Optometric Association. Looking at the image above, young children know to focus on the little girl, the main character of that image. But AI doesn't necessarily come with this prior knowledge, especially when it comes to flat, 2D images. The main character could be anywhere within the frame.

In order for the machines to identify the main character, researchers train them with thousands of images. Bengio and his colleagues trained their program with 120,000 images. Their AI could recognize the object in the image, while simultaneously focusing on relevant sections of the image while in the process of describing it.

computer describes image"As the machine develops one word after another in the English sentence, for each word, it chooses to look in different places of the image that it finds to be most relevant in deciding what the next word should be," Bengio said. "We can visualize that. We can see for each word in the sentence where it's looking in the image. It's looking where you would imagine ... just like a human would."

According to Scientific American, the system's responses "could be mistaken for that of a human" about 70% of the time.

However, it seemed to falter when the program focused on the wrong thing, when the images had multiple figures or if the images were more visually complicated. In the batch of images below, the program describes the top middle image, for example, as "a woman holding a clock in her hand" because it mistook the logo on her shirt for a clock.

The program also misclassified objects it got right in other images. For example, the giraffes in the upper right picture were mistaken for "a large white bird standing in a forest."

Computer description errorsIt did however, correctly predict what every man and woman wants: to sit at a table with a large pizza.

SEE ALSO: These trippy images show how Google's AI sees the world

Join the conversation about this story »

NOW WATCH: We tried cryotherapy — the super-cold treatment LeBron James swears by

Here’s why it’s so hard to make a funny robot

$
0
0

new yorker racism cartoon

Artificial intelligence can do a lot of things, like recognizing your face or identifying good art.

But it still can't tell a joke.

And though humor has a reputation for being subjective and changing over time, researchers have long suspected that there might be a few basic elements of humor.

Researchers from the University of Michigan, Columbia University, Yahoo! Labs and the New Yorker have teamed up, channeling the power of big data to help identify some of those building blocks of humor. The study was published in the online journal arxiv.

Every week, the New Yorker runs a caption contest in the back of the magazine. 5,000 people enter submissions, then readers vote on which of the top three is the best fit for the cartoon. That has created an enormous database of captions—more than two million captions for more than 400 cartoons in the past decade.

The researchers wanted to use this data to create an algorithm that can distinguish the funny and unfunny captions. The researchers picked 50 cartoons and the 300,000 captions that corresponded to them and used a program to analyze and rank the captions linguistically, looking at whether the captions were about people or dealt with positive or negative emotions. They also created a different ranking using network theory, which connected the topics mentioned in each. Then they got real people to weigh in on the humor, asking seven users on Amazon's Mechanical Turk to rank which of two caption options is funnier.

They found a few trends when they compared all the rankings to one another. "We found that the methods that consistently select funnier captions are negative sentiment, human-centeredness, and lexical centrality," they write.

Those conclusions aren't that surprising, given the New Yorker's readership, they note. But it's hard to know what these conclusions mean for people trying to develop a funny robot. As MIT Tech Review writes:

It's easy to imagine that one goal from this kind of work would be to create a machine capable of automatically choosing the best caption from thousands entered into the New Yorker competition each week. But the teams seem as far as ever from achieving this. Did any of these automatic methods reliably pick the caption chosen by readers? Radev and co. do not say, so presumably not.

This kind of work in itself won't help a functional joke-writing machine. But the team is also releasing the cartoon data they used to the public. So, as MIT Tech Review notes, "if there's anybody out there who thinks they can do better, they're welcome to try."

This article originally appeared on Popular Science.

This article was written by Alexandra Ossola from Popular Science and was legally licensed through the NewsCred publisher network.

SEE ALSO: Google’s AI system created some disturbing images after ‘watching’ the film Fear and Loathing in Las Vegas

NOW READ: Scientists discovered what makes something funny

Join the conversation about this story »

NOW WATCH: Benedict Cumberbatch And The Cast Of 'The Imitation Game' Have Mixed Feelings About Artificial Intelligence

How Google and cats rekindled the research into Artificial Intelligence (GOOG)

$
0
0

cat computer hacker mouse

Artificial Intelligence may seem like a buzzword now, but for decades it wasn't considered all that interesting by those in the field.

In fact, though there were a few scientists working tirelessly for decades on neural network projects, researching ways to train computers to behave similarly to humans, there was period of time known as "AI Winter," where scientific interest in the topic all but withered away.

But, according to a new long-form article in Re/code recounting the long and bizarre history of AI, in 2012 something happened to rekindle interest in the idea that computers too can make decisions like people — and cats were involved.

Re/code's Mark Bergen and Kurt Wagner write that in 2012 Google had what was called a "Brain" team, whose entire project was "to build the largest artificial neural network, an AI brain." And, one day, this team decided to see how its artificial brain interpreted YouTube videos.

YouTube — being a bastion of all things feline related — undoubtedly had thousands of videos of cats. And this Google Brain, when it was fed millions of videos, was able to deduce which videos had cats in them "without input on feline features."

In short, this random AI exercise proved that computers can learn without being given the precise data of what something is.

This project garnered a lot of publicity, according to Re/code. And it also showed universities the potential for similar AI projects. Even more, it gave Google fodder to further invest in AI, making it one of the leading tech companies investing in cutting-edge artificial intelligence programs.

And other companies have since followed Google’s lead.

Now AI has become a truly hot topic for both companies and university researchers. Places like Google and Facebook are now paying top dollar for experts in the field. And all this is, in part, thanks to a fake brain learning on its own what cats are.

You can read Re/code’s entire history of the leading experts in the AI field here.

SEE ALSO: Experts believe China is building a 'Facebook of human intelligence'

Join the conversation about this story »

NOW WATCH: People doing backflips on a two-inch wide strap is a real sport called slacklining

Viewing all 1375 articles
Browse latest View live


<script src="https://jsc.adskeeper.com/r/s/rssing.com.1596347.js" async> </script>