
Scientists built a computer that can beat you at classic Atari games



Computers have already beaten humans at chess and "Jeopardy!," and now they can add one more feather to their caps: the ability to best humans in several classic arcade games.

A team of scientists at Google created an artificially intelligent computer program that can teach itself to play Atari 2600 video games, using only minimal background information to learn how to play.

By mimicking some principles of the human brain, the program is able to play at the same level as a professional human gamer, or better, on most of the games, researchers reported today (Feb. 25) in the journal Nature.

This is the first time anyone has built an artificial intelligence (AI) system that can learn to excel at a wide range of tasks, study co-author Demis Hassabis, an AI researcher at Google DeepMind in London, said at a news conference yesterday.

Future versions of this AI program could be used in more general decision-making applications, from driverless cars to weather prediction, Hassabis said.

Learning by reinforcement

Humans and other animals learn by reinforcement — engaging in behaviors that maximize some reward. For example, pleasurable experiences cause the brain to release the chemical neurotransmitter dopamine.

But in order to learn in a complex world, the brain has to interpret input from the senses and use these signals to generalize past experiences and apply them to new situations.
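This reward-driven learning has a standard computational form: Q-learning, in which an agent nudges its estimate of an action's value toward the reward it actually receives. The sketch below is purely illustrative (DeepMind's DQN replaces the lookup table with a deep neural network), and the state and action names are invented for the example.

```python
# Minimal tabular Q-learning update: reward-driven (reinforcement) learning
# in computational form. Illustrative only; DQN replaces this lookup table
# with a deep neural network.
def q_update(q, state, action, reward, next_state, actions,
             alpha=0.1, gamma=0.99):
    """Nudge Q(state, action) toward reward + discounted best future value."""
    best_next = max(q.get((next_state, a), 0.0) for a in actions)
    old = q.get((state, action), 0.0)
    q[(state, action)] = old + alpha * (reward + gamma * best_next - old)

q = {}
q_update(q, "s0", "fire", 1.0, "s1", actions=["fire", "wait"])
# With all estimates starting at zero, Q("s0", "fire") moves
# alpha * reward = 0.1 of the way toward the observed reward.
```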

When IBM's Deep Blue computer defeated chess grandmaster Garry Kasparov in 1997, and the artificially intelligent Watson computer won the quiz show "Jeopardy!" in 2011, these were considered impressive technical feats, but they were mostly preprogrammed abilities, Hassabis said.

In contrast, the new DeepMind AI is capable of learning on its own, using reinforcement.

To develop the new AI program, Hassabis and his colleagues created an artificial neural network based on "deep learning," a machine-learning algorithm that builds progressively more abstract representations of raw data. (Google famously used deep learning to train a network of computers to recognize cats based on millions of YouTube videos, but this type of algorithm is actually involved in many Google products, from search to translation.)

The new AI program is called the "deep Q-network," or DQN, and it runs on a regular desktop computer.

Playing games

The researchers tested DQN on 49 classic Atari 2600 games, such as "Pong" and "Space Invaders." The only pieces of information about the game that the program received were the pixels on the screen and the game score.

"The system learns to play by essentially pressing keys randomly" in order to achieve a high score, study co-author Volodymyr Mnih, also a research scientist at Google DeepMind, said at the news conference.
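The random key-pressing Mnih describes is typically implemented as epsilon-greedy exploration: with probability epsilon the agent presses a random key, and otherwise it presses the key with the highest learned value. This is a sketch of that general technique, not DeepMind's actual code.

```python
import random

# Epsilon-greedy action selection: the standard way a reinforcement-learning
# agent trades random exploration against exploiting what it has learned.
# Illustrative sketch, not DeepMind's implementation.
def choose_action(q_values, epsilon):
    if random.random() < epsilon:
        return random.randrange(len(q_values))  # explore: press a random key
    # exploit: press the key with the highest estimated value
    return max(range(len(q_values)), key=lambda a: q_values[a])

# Early in training epsilon is near 1.0, so play is mostly random;
# it decays as the agent learns which actions raise the score.
assert choose_action([0.1, 0.9, 0.3], epsilon=0.0) == 1
```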

After a couple of weeks of training, DQN performed as well as professional human gamers on many of the games, which ranged from side-scrolling shooters to 3D car-racing games, the researchers said. The AI program scored at least 75 percent of the human score on more than half of the games, they added.

Sometimes, DQN discovered game strategies that the researchers hadn't even thought of — for example, in the game "Seaquest," the player controls a submarine and must avoid, collect or destroy objects at different depths.

The AI program discovered it could stay alive by simply keeping the submarine just below the surface, the researchers said.

More complex tasks

DQN also made use of another feature of human brains: the ability to remember past experiences and replay them in order to guide actions (a process that occurs in a seahorse-shaped brain region called the hippocampus).

Similarly, DQN stored "memories" from its experiences, and fed these back into its decision-making process during gameplay.
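This "memory" mechanism is known as experience replay: transitions are stored in a fixed-size buffer and sampled at random during training, which breaks the correlation between consecutive frames. Below is a minimal sketch; the capacity and field names are illustrative (the published DQN stores on the order of a million recent transitions).

```python
import random
from collections import deque

# Minimal experience-replay buffer (illustrative sketch).
class ReplayBuffer:
    def __init__(self, capacity):
        # deque(maxlen=...) silently drops the oldest memory when full.
        self.memory = deque(maxlen=capacity)

    def store(self, state, action, reward, next_state):
        self.memory.append((state, action, reward, next_state))

    def sample(self, batch_size):
        # Uniform random sampling decorrelates consecutive game frames.
        return random.sample(self.memory, batch_size)

buf = ReplayBuffer(capacity=100)
for t in range(5):
    buf.store(t, 0, 1.0, t + 1)
batch = buf.sample(3)  # three random past transitions to learn from
```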

But human brains don't remember all experiences the same way. They're biased to remember more emotionally charged events, which are likely to be more important.

Future versions of DQN should incorporate this kind of biased memory, the researchers said.

Now that their program has mastered Atari games, the scientists are starting to test it on more complex games from the '90s, such as 3D racing games. "Ultimately, if this algorithm can race a car in racing games, with a few extra tweaks, it should be able to drive a real car," Hassabis said.

In addition, future versions of the AI program might be able to do things such as plan a trip to Europe, booking all the flights and hotels. But "we're most excited about using AI to help us do science," Hassabis said.

Follow Tanya Lewis on Twitter. Follow us @livescience, Facebook & Google+. Original article on Live Science.

Copyright 2015 LiveScience, a Purch company. All rights reserved. This material may not be published, broadcast, rewritten or redistributed.

SEE ALSO: Afraid Of AI? Here's Why You Shouldn't Be

Join the conversation about this story »

NOW WATCH: 11 Video Games From The 1980s That Are Better Than Games Today


Here’s how close we are to the AI robot in the new movie ‘Chappie’



The new film "Chappie" features an artificially intelligent robot that becomes sentient and must learn to navigate the competing forces of kindness and corruption in a human world.

Directed by Neill Blomkamp, whose previous work includes "District 9" and "Elysium," the film takes place in the South African city of Johannesburg. The movie's events occur in a speculative present when the city has deployed a force of police robots to fight crime. One of these robots, named "Chappie," receives an upgrade that makes him sentient.

Blomkamp said his view of artificial intelligence (AI) changed over the course of making the film, which opens in the United States on Friday (March 6). "I'm not actually sure that humans are going to be capable of giving birth to AI in the way that films fictionalize it," he said in a news conference. [Super-Intelligent Machines: 7 Robotic Futures]

Yet, while today's technology isn't quite at the level of that in the film, "We definitely have had major aspects of systems like Chappie already in existence for quite a while," said Wolfgang Fink, a physicist and AI expert at Caltech and the University of Arizona, who did not advise on the film.

Chappie in real life?

Existing AI computer systems modeled on the human brain, known as artificial neural networks, are capable of learning from experience, just like Chappie does in the film, Fink said. "When we expose them to certain data, they can learn rules, and they can even learn behaviors," he said. Today's AI can even teach itself to play video games.

Something akin to Chappie's physical hardware also exists. Google-owned robotics company Boston Dynamics, based in Waltham, Massachusetts, has an anthropomorphic bipedal robot, called PETMAN, that can walk, bend and perform other movements on its own. And carmaker Honda has ASIMO, a sophisticated humanoid robot that once played soccer with President Barack Obama.

But Chappie goes beyond what current systems can do, because he becomes self-aware. There's a moment during the film when he says, "I am Chappie."

"That statement, if that's truly the result of a reasoning process and not trained, that is huge," Fink said. An advance like that would mean robots could go beyond being able to play a video game or execute a task better than a human. The machine would be able to discriminate between self and nonself, which is a "key quality of any truly autonomous system," Fink said.

Childlike persona

As opposed to the "Terminator"-style killing machines of most Hollywood AI films, Chappie's persona is depicted as childlike and innocent — even cute.

To create Chappie, actor Sharlto Copley performed the part, and a team of animators "painted" the computer-generated robot over his performance, said visual effects supervisor Chris Harvey.

"We still had Sharlto on set [as Chappie]," Harvey told Live Science. But unlike many other special-effects-heavy films, "Chappie" did not use motion capture, which involves an actor wearing a special suit with reflective markers attached and having cameras capture the performer's movements. Instead, "the animators did that by hand," Harvey said.

Because Chappie is a robot, Harvey's biggest fear was that it wouldn't be able to convey emotion. So his team gave Chappie an expressive pair of "ears" (antennae), a brow bar and a chin bar, which could express a fairly wide range of emotions, "almost like a puppy dog," Harvey said.

Humanity's biggest threat

In the film, Chappie's "humanity" is sharply contrasted with the inhumanity of Hugh Jackman's character Vincent Moore, a former military engineer who is developing a massive, brain-controlled robot called the "Moose" to rival intelligent 'bots like Chappie.

"The original concept for Jackman's character was always to be in opposition to artificial intelligence," Blomkamp told reporters.

Jackman himself takes a more positive view of AI. "Unlike my character, I like to think optimistically about these discoveries," Jackman said in a news conference. "I'm a firm believer that the pull for human beings is toward the good generally outweighing the bad."

But billionaire Elon Musk and famed astrophysicist Stephen Hawking have sounded alarms about the dangers of artificial intelligence, with Musk calling it humanity's "biggest existential threat."

Truly autonomous AI is not something most researchers are working on, but Fink shares some of these concerns.

"Depending on how old we are, we might see something in our lifetime which might become scary," Fink said. If it gets out of control, he said, "then we have created a monster."

Follow Tanya Lewis on Twitter. Follow us @livescience, Facebook & Google+. Original article on Live Science.

Copyright 2015 LiveScience, a Purch company. All rights reserved. This material may not be published, broadcast, rewritten or redistributed.

SEE ALSO: If the zombie apocalypse happens, scientists say you should head for the hills


NOW WATCH: Scientists Discovered What Actually Wiped Out The Mayan Civilization

Critics are wrong — here's why 'Chappie' is incredibly underrated



Fans of "District 9" will have no trouble recognizing Neill Blomkamp's footprint in "Chappie," the Canadian-South African director's third feature film, about a titanium police droid that gains consciousness thanks to a big software update.

Blomkamp's style is felt in a story that bounces among unlikely heroes, humor richer than the lowest-common-denominator chuckles of a typical Marvel flick, and stakes that just keep climbing.

That's all very parallel to what we saw in Blomkamp's directorial debut in 2009's "District 9."

Despite some pretty negative reviews, and an underwhelming $13.3 million opening at the box office, "Chappie" is a movie that could easily be enjoyed a second time on the big screen.

A few plot details follow, but nothing too heavy in spoilers!

Once a ubiquitous member of the cop-robot force that helped rein in scary homicide rates in Johannesburg, Unit 022 is damaged and labeled fit for the scrapyard.

That is, until his designer, an engineer named Deon (Dev Patel) who moonlights in Red Bull-fueled attempts at designing true AI, installs his latest software attempt into the droid's head.

It works. Unit 022 becomes Chappie (voiced and motion-acted by Sharlto Copley), essentially a child with a hyper-capable body and a blistering learning pace. Vulture calls Chappie a robotic version of the widely hated Jar Jar Binks. And sure, there's some validity in that — from the character's odd English and his bodily dimensions to his nervous traits.

But Chappie won't annoy you like Jar Jar did the masses of "Star Wars" fans. One early scene is actually pretty heart-wrenching, as Chappie is pushed into homelessness by stewards eager to toughen him up.

Unfortunately for him, Chappie is a hero lost among anti-heroes (balanced against a few villains). Chappie's malleability is used by a trio of bad but not totally rotten gangsters — they're in falsified debt to a ruthless warlord (Brandon Auret).

The gangsters are played by rap duo Die Antwoord's Yolandi Visser and Watkin Tudor Jones, alongside Jose Pablo Cantillo. Apparently the rappers weren't the easiest to get along with on the set. A South African publication reported on Jones' backseat directing, and heard from anonymous sources on the set that Blomkamp himself said, "I don’t ever want to be in the same room as him again."

Too bad they won't work with Blomkamp again. Tudor Jones and Visser are a bright spot in a cast of more established names that don't stand out themselves.

They have the benefit of bringing their real-life physiques to the set, and even spray-paint a few decals from some of their albums onto Chappie's bodywork (not a bad product placement). The gangsters try to mold Chappie into an unbeatable asset for high crime, though Yolandi Visser's character is just as happy reading him a book at night.

Hugh Jackman plays Vincent Moore, a frustrated meathead smart enough to have engineered his own robotic weapon (the mind-controlled "Moose"), but not quite smart enough to see why Deon's versatile robots have fared better with the Johannesburg police's budget allocators. When he's not causing problems for Chappie and the gang, Jackman's character fumes at his desk, wringing his hands around a rugby ball.

Even Chappie's maker, Deon, doesn't have the best instincts as he's kept in thrall by the three gangsters, who, in a limited way, have come to care for Chappie beyond his ability to pack muscle.


Much of the film's humor arises from the dissonance between Chappie's unmatched fighting ability and his childlike manner. Soon enough, the gangsters' den starts to resemble an unlikely but recognizable, almost loving home for Chappie's accelerated boyhood.

Like any machine, he takes orders to literal extremes that humans would implicitly understand as not exactly what was meant.

Finally, "Chappie" keeps driving to greater and greater stakes. The gangsters might be in it for Chappie's criminal potential, but that's soon overtaken by the world-shifting implications of bona fide artificial intelligence — a machine that learns, feels, fears, and longs to survive. Just like the bumbling protagonist of Blomkamp's hit "District 9," the characters in "Chappie" are lost in something a lot greater than themselves.

Overall, "Chappie" is a solid action flick with a plot spine strong enough to string together the gun-show set pieces, which come quickly enough. Blomkamp keeps the same mind-blowing contrast between futuristic weaponry and gritty urban settings we enjoyed on our last tour of near-future Johannesburg with "District 9."

The ending raises a few questions — some of them on the nature of AI, others, less appealingly, about the plausibility of the last few scenes, which we won't spoil here.

Perhaps the biggest question the film poses is this: What happens when AI is smart enough to do more than it was designed for?

It's a question a few films will take on this year, from the British thriller "Ex Machina" to the highly anticipated "Avengers" sequel.

At the very least, if you enjoyed "District 9"— quirks, action, plot and all — Blomkamp's latest won't disappoint.



No, Bridgewater didn't just build a team of robotic traders — they've had robot traders for 32 years



In February, Bloomberg reported that Ray Dalio's Bridgewater Associates, the world's largest hedge fund, with $160 billion in assets, was building a new artificial intelligence team under senior technologist Dave Ferrucci. It was to launch this month.

The report seemed to offer a rare glimpse into Bridgewater's mysterious investment approach and where it was heading.

However, a Bridgewater representative tells Business Insider that the hiring of Ferrucci was misconstrued. Bridgewater has been developing AI since 1983.

Here's the full statement:

There has been a lot of speculation in the media, as well as some misunderstanding, about what Bridgewater is doing in the area of artificial intelligence, and with Dave Ferrucci. We felt it was important to clarify this.

Ever since 1983 Bridgewater Associates has been creating systematic decision-making processes that are computerized. We believe that the same things happen over and over again because of logical cause/effect relationships, and that by writing one’s principles down and then computerizing them one can have the computer make high-quality decisions in much the same way a GPS can be an effective guide to decision making.

Like using a GPS, one can choose to follow the guidance or not follow it depending on how it reconciles. It is through this never ending reconciliation process that the computer decision-making system constantly learns, and the learning compounds over time.

It is because Bridgewater and Dave Ferrucci both have long and deep commitments to this area that Dave has recently joined Bridgewater. It would be a mistake to think that this is a new undertaking for Bridgewater or that the process being used at Bridgewater is like some artificial intelligence systems that are based on data-mining rather than well-examined logic.

SEE ALSO: Jeb Bush and Scott Walker had two very different lunches on Thursday


NOW WATCH: This is what separates the Excel masters from the wannabes

Bill Gates thinks super machines could eventually become smarter than humans and take our jobs



Microsoft founder and philanthropist Bill Gates thinks we have reason to be concerned about the threats artificial intelligence could pose to our future. 

According to Gates, there are two main ways artificial intelligence could become harmful: it could eventually substitute some human labor in the workplace, and it could grow to become more intelligent than humans.

These issues seem to be solvable, according to Gates, but one may be easier to address than the other.

Here's what he said to Re/code's Ina Fried on the subject:

There are two different threat models for AI. One is simply the labor substitution problem. That, in a certain way, seems like it should be solvable because what you are really talking about is an embarrassment of riches. But it is happening so quickly. It does raise some very interesting questions given the speed with which it happens.

Then you have the issue of greater-than-human intelligence. That one, I’ll be very interested to spend time with people who think they know how we avoid that. I know Elon [Musk] just gave some money. A guy at Microsoft, Eric Horvitz, gave some money to Stanford. I think there are some serious efforts to look into could you avoid that problem.

This isn't the first time Gates has spoken about artificial intelligence. About a year ago, speaking at the American Enterprise Institute in Washington, D.C., he said that software substitution for labor, whether for "drivers or waiters or nurses," is progressing.

In a recent Ask Me Anything thread on Reddit, Gates also said he's "in the camp that is super concerned about artificial intelligence." 


NOW WATCH: How to supercharge your iPhone in only 5 minutes

Steve Wozniak: 'Computers are going to take over from humans'



Apple cofounder Steve Wozniak has revealed that he's increasingly worried about the threat that artificial intelligence (AI) poses to humanity.

"Computers are going to take over from humans," the 64-year-old engineer told the Australian Financial Review. "No question."

Increasing numbers of prominent figures in the tech world have begun to speak up about the potential risks of AI. While truly intelligent machines (if they are even possible) could be a boon to industry, they could also prove dangerous if they decided to turn on their creators.

Tesla CEO Elon Musk says that AI poses the "biggest existential threat" to humanity, and speaks frequently about the issue. And Microsoft founder Bill Gates says that within a few decades, AI will be "strong enough to be a concern. I agree with Elon Musk and some others on this and don't understand why some people are not concerned." Respected physicist Stephen Hawking has also said that AI could "spell the end of the human race."

Wozniak told the Australian Financial Review that, "like people including Stephen Hawking and Elon Musk have predicted, I agree that the future is scary and very bad for people. If we build these devices to take care of everything for us, eventually they'll think faster than us and they'll get rid of the slow humans to run companies more efficiently."

He adds: "Will we be the gods? Will we be the family pets? Or will we be ants that get stepped on? I don't know about that … But when I got that thinking in my head about if I'm going to be treated in the future as a pet to these smart machines … well I'm going to treat my own pet dog really nice."


NOW WATCH: The US Navy just unveiled a robot that can walk through fire

Here's why AI is not going to destroy humanity


Elon Musk is terrified of artificial intelligence (AI). The founder of SpaceX and Tesla Motors predicts it'll soon be "potentially more dangerous than nukes," and recently he gave $10 million toward research to "keep AI beneficial."

Stephen Hawking has likewise warned that "the development of full AI could spell the end of the human race."

Musk and Hawking don't fear garden variety smartphone assistants, like Siri.

They fear superintelligence—when AI outsmarts people (and then enslaves and slaughters them).

But if there's a looming AI Armageddon, Silicon Valley remains undeterred. As of January, as many as 170 startups were actively pursuing AI. Facebook has recruited some of the field's brightest minds for a new AI research lab, and Google paid $400 million last year to acquire DeepMind, an AI firm.

The question then becomes: Are software companies and venture capitalists courting disaster? Or are humankind's most prominent geeks false prophets of end times?

Surely, creating standards for the nascent AI industry is warranted, and will be increasingly important for, say, establishing the ethics of self-driving cars. But an imminent robot uprising is not a threat.

The reality is that AI research and development is tremendously complex, and even intellects like Musk and Hawking don't necessarily have a solid understanding of it. As such, they fall back on specious assumptions, drawn more from science fiction than the real world.


Of those who actually work in AI, few are particularly worried about runaway superintelligence. "The AI community as a whole is a long way away from building anything that could be a concern to the general public," says Dileep George, co-founder of Vicarious, a prominent AI firm. Yann LeCun, director of AI research at Facebook and director of the New York University Center for Data Research, stresses that the creation of human-level AI is a difficult—if not hopeless—goal, making superintelligence moot for the foreseeable future.

AI researchers are not, however, free from all anxieties. "What people in my field do worry about is the fear-mongering that is happening," says Yoshua Bengio, head of the Machine Learning Laboratory at the University of Montreal. Along with confusing the public and potentially turning away investors and students, Bengio says, "there are crazy people out there who believe these claims of extreme danger to humanity. They might take people like us as targets."

The most pressing threat related to AI, in other words, might be neither artificial nor intelligent. And the most urgent task for the AI community, then, is addressing the branding challenge, not the technological one. Says George: "As researchers, we have an obligation to educate the public about the difference between Hollywood and reality."

This article was originally published in the March 2015 issue of Popular Science, under the title "Artificial Intelligence Will Not Obliterate Humanity."

This article originally appeared on Popular Science

This article was written by Erik Sofge from Popular Science and was legally licensed through the NewsCred publisher network.

 

SEE ALSO: KURZWEIL: Human-Level AI Is Coming By 2029


NOW WATCH: Why a NASA mission to Jupiter’s famous icy moon is now a priority

Some AI robots can already pass part of the Turing test


Artificial intelligence will rule Hollywood (intelligently) in 2015, with a slew of both iconic and new robots hitting the screen. From the Turing-bashing "Ex Machina" to old friends R2-D2 and C-3PO, and new enemies like the Avengers' Ultron, sentient robots will demonstrate a number of human and superhuman traits on-screen. But real-life robots may be just as thrilling. In this five-part series, Live Science looks at these made-for-the-movies advances in machine intelligence.

The Turing test, a foundational method of AI evaluation, shapes the plot of April's sci-fi/psychological thriller "Ex Machina." But real-life systems can already, in some sense, pass the test. In fact, some experts say AI advances have made the Turing test obsolete.

Devised by Alan Turing in 1950, the computing pioneer's namesake test states that if, via text-mediated conversation, a machine can convince a person it is human, then that machine has intelligence. In "Ex Machina," Hollywood's latest mad scientist traps a young man with an AI robot, hoping the droid can convince the man she is human — thus passing the Turing test. Ultimately, the robot is intended to pass as a person within human society. [Super-Intelligent Machines: 7 Robotic Futures]

Last year, without so much kidnapping but still with some drama, the chatbot named "Eugene Goostman" became the first computer to pass the Turing test. That "success," however, is misleading, and exposes the Turing test's flaws, Charlie Ortiz, senior principal manager of AI at Nuance Communications, told Live Science. Eugene employed trickery by imitating a surly teenager who spoke English as a second language, Ortiz said. The chatbot could "game the system" because testers would naturally blame communication difficulties on language barriers, and because the teenage persona allowed Eugene to act rebelliously and dodge questions.

Turing performances like Eugene's, as a result, actually say little about intelligence, Ortiz said.

"They can just change the topic, rather than answering a question directly," Ortiz said. "The Turing test is susceptible to these forms of trickery."

Moreover, the test "doesn't measure all of the capabilities of what it means to be intelligent," such as visual perception and physical interaction, Ortiz said.

As a result, Ortiz's group at Nuance and others have proposed new AI tests. For example, Turing 2.0 tests could ask machines to cooperate with humans in building a structure, or associate stories or descriptions with videos.

Aside from the separate challenge of creating a realistic-looking humanoid robot, AI still faces a number of hurdles before it could convincingly "pass" as a human in today's society, Ortiz said. Most tellingly, computers still can't handle common-sense intelligence very well.

For instance, when presented with a statement like, "The trophy would not fit in the suitcase because it was too big," robots struggle to decide if "it" refers to the trophy or the suitcase, Ortiz said. (Hint: It's the trophy.)
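The trophy/suitcase sentence is a classic Winograd schema: a common-sense test built from a minimal pair, where changing a single word flips the correct referent, so surface word statistics alone can't resolve the pronoun. Here is one way such a test case could be written down as data; the field names and structure are invented for illustration.

```python
# A Winograd-style schema represented as data (illustrative sketch).
# Swapping a single word ("big" vs. "small") flips which noun the
# pronoun "it" refers to, defeating simple pattern matching.
schema = {
    "template": "The trophy would not fit in the suitcase because it was too {}.",
    "candidates": ("trophy", "suitcase"),
    "answers": {"big": "trophy", "small": "suitcase"},
}

def referent(filler_word):
    """Return the noun that 'it' refers to for the given filler word."""
    return schema["answers"][filler_word]

assert referent("big") == "trophy"      # too big to fit -> the trophy
assert referent("small") == "suitcase"  # too small to hold it -> the suitcase
```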

"Common sense has long been the Achilles' heel of AI," Ortiz said.

Check out the rest of this series: How Real-Life AI Rivals 'Chappie': Robots Get Emotional; How Real-Life AI Rivals 'Ultron': Computers Learn to Learn; How Real-Life AI Rivals 'Terminator': Robots Take the Shot; and How Real-Life AI Rivals 'Star Wars': A Universal Translator?

Follow Michael Dhar @michaeldhar. Follow us @livescience, Facebook & Google+. Original article on Live Science.

Copyright 2015 LiveScience, a Purch company. All rights reserved. This material may not be published, broadcast, rewritten or redistributed.

SEE ALSO: Stephen Hawking thinks these 3 things have the potential to destroy humanity

SEE ALSO: KURZWEIL: Human-Level AI Is Coming By 2029


NOW WATCH: A 1,100-Ft Wide Asteroid And Its Orbiting Moon Just Zoomed Past Earth


5 things Elon Musk believed would change the future of humanity... in 1995



So what drives a game-changing figure like Elon Musk?

That's what Neil deGrasse Tyson was wondering when he got the chance to interview Musk for the March 22 episode of his StarTalk Radio podcast.

Musk enrolled at Queen's University in 1990, where he studied for two years before transferring to the University of Pennsylvania. There, he spent three years earning a Bachelor of Science degree in physics and another in economics, graduating in 1995.

Most college graduates in those days had heads full of dreams of grandeur and world-changing innovations. Musk was just the same, he told Tyson in the podcast.

Here's what Musk said:

When you are starting out in college, in your freshman and sophomore year, you have these sort of sophomoric philosophical wanderings. And I tried to think of, OK, what are the things that seem to me would most affect the future of humanity?

There were really five things, three of which I thought would be interesting to be involved in. And the three that I thought would definitely be positive: the internet, sustainable energy — both production and consumption — and space exploration, more specifically the extension of life beyond Earth.

Though I never thought I would actually be involved in that, it was something I'd thought would be important in the abstract. But not something I would ever have an option to be involved in.

The fourth one was artificial intelligence and the fifth one was rewriting human genetics.

These are just the five things I thought would most affect the future of humanity.

Musk has been involved in three of these industries, so far.

First, he made millions from his involvement in online payment company PayPal.

Musk is now chairman and CEO of Tesla Motors, which makes fancy electric cars — decreasing our reliance on oil and promoting sustainable energy options.

He's also chairman of SolarCity, a company created by his cousins Peter and Lyndon Rive. SolarCity provides energy services by selling and leasing solar panels to home and business owners.

Beyond all those accomplishments, Musk also dove headfirst into the one goal he didn't think he'd be involved in: space exploration.

In 2002 — just seven years after he finished university — he founded Space Exploration Technologies, commonly called SpaceX, using $100 million of his own money from selling PayPal to eBay. The mission? To build a human colony on Mars.

In 2010, SpaceX became the first privately funded company to successfully launch a spacecraft, the Dragon capsule, have it orbit the Earth and recover it after the flight. In 2012, that same Dragon made SpaceX the world's first private company to send a spacecraft to the International Space Station. Since then, the company has successfully flown several more cargo missions to the space station for NASA.

They are currently working on making their rockets reusable — an incredible accomplishment that would cut space travel costs drastically. They also have plans for a Mars mission, to set up an off-Earth colony of humans to do precisely what Musk wanted — the extension of life beyond Earth. SpaceX will reportedly unveil their "Mars Colonizing Transporter" to do just that later this year.

Listen to the entire episode of StarTalk, also featuring Bill Nye the Science Guy and comedian Chuck Nice, here:

SEE ALSO: The world's most powerful particle accelerator just started running again — here's what it may find

Join the conversation about this story »

NOW WATCH: Watch SpaceX launch their Falcon 9 rocket right at sunset

Here's the concept art that inspired the robot from the year's best sci-fi movie



Artificial intelligence has become a popular topic in movies recently, with themes ranging from the gun-toting Chappie to the lovable Baymax in "Big Hero 6."

But "Ex Machina," opening in limited release Friday, features an AI so realistic that you'll be thinking about it long after the credits roll.

“Ex Machina” is the directorial debut of Alex Garland (best known for writing "28 Days Later" and "Dredd") and follows young programmer Caleb (Domhnall Gleeson) who is invited to stay with Nathan (Oscar Isaac), a reclusive Steve Jobs-like CEO of the company he works for. Once there, Caleb learns that Nathan has created one of the most sophisticated AIs and wants Caleb to test it to see how human it can be.

The AI, named Ava (played by newcomer Alicia Vikander), speaks and acts like any human being, but physically there's no mistaking that she's a robot.

This distinction was one Garland was adamant about.

We reached out to Mark Simpson, who previously worked with Garland on "Dredd," to learn more about the physical evolution of Ava. Simpson, who's known in the art world as Jock, was responsible for Ava's concept art — the drawings and designs the production uses as a starting point in the creation of the characters and sets. 

Jock shared six of his concept images for "Ex Machina" with Business Insider, taking us through his process for the creation of Ava.

Ex Machina Jock 1.PNG

To get Ava right, Garland and Jock spent a lot of time talking about what the movie's AI should not look like.

"We went through so many variations in the early stages of designing Ava. I started out with a figure much closer to human, with internal lights and a few subtle oddities in the joints, but Alex really pushed for a far more robotic look; and of course his instincts were right. To present something that is entirely mechanical, and then ask how the viewer feels about it, that's a really interesting question. These variations are somewhere in between those two initial ideas."

Ex Machina Jock 3.PNG

Here we see the evolution of Ava. 

"This image was one that got us closer to the final design. The breakthrough with Ava came when Alex came up with the idea of the mesh that would cover her entire body. In certain light, she'd look entirely mechanical, with her midriff and limbs missing — almost a typical 'robot' — but the light would catch the mesh as she turned, or in certain light would reveal a beautiful female form. I think it works incredibly well in the film — she looks completely seductive but entirely mechanical. This is obviously underpinned by Alicia's amazing performance and Double Negative's entirely convincing VFX."

Ex Machina Jock 2.PNG

Though most of the movie is inside Nathan's underground compound, Jock didn't know that when he was creating his concepts. This gave him the freedom to place Ava in any world he wanted.

"Very early concept work is completely free from the constraints of budget, location, and sets — or at least when I work with Alex he encourages that mindset. The practicalities of getting it on film are a problem to overcome later, and he's always keen for me to be free of any constraints in the conceptual stage. This image was obviously before production found the stunning Norway location where the exteriors were eventually shot, so it shows a different feel to the landscape."

Ex Machina Jock 4.PNG

But sometimes the concept art can inspire the way shots come together when filming takes place.

"This proved a popular image in production, and you can see this shot in varying degrees in the final film. There are smoked glass doors all over Nathan's mansion, providing glimpses of figures as they enter or leave. The metaphors are fairly obvious here, with Ava appearing slightly unseen and enigmatic."

Ex Machina Jock 5.JPG

Concepts also help express the kind of tone the film should have.

"This is probably my favorite image, but perhaps not the most obvious. For me it sums up, tonally, what the film is about: it's explicit, but has beauty. It's very naturalistic, but we also see the inner workings of the robot, giving a mechanical quality to the figure. I like that juxtaposition."

Ex Machina Jock 6.JPG

At the end of the day, the goal of the concept art is to be the first step of a character's life and their world — sometimes, even, the world they dream of.

"Another thematic idea, rather than a specific shot from the script; 'What would this robot look like in a natural environment?' 'What would it be looking for, once it was outside?' More often than not the more successful images come from a simple feeling rather than trying to manufacture a look. And this was one of those, produced very quickly."

Here's a finished look of Ava as she appears on the poster for "Ex Machina":

"Ex Machina" is currently in limited release and goes wide theatrically April 24.

SEE ALSO: Top scientists have an ominous warning about artificial intelligence



The Apple Watch is a misunderstood bridge to the future



The first Apple Watch reviews came out this week, and they weren't great.

Reviewers did praise the design and the way it lets you leave your phone in your pocket more, and blogger John Gruber had a great, non-cynical take on the "taptic" communications, which let you send little taps and a representation of your heartbeat to other watch wearers: "Non-verbal, non-visual, physical communication across any distance. This could be something big."

But beneath headline phrases like "magical" and "bliss," reviewers had a lot of complaints. Most of them revolved around two points.

First, it wasn't clear exactly what problem any smart watch is supposed to solve, and the Apple Watch made it no clearer.

Second, the Watch itself seemed fussy to use. The controls were hard to figure out and required a steep learning curve. It was too easy to hit the wrong icon. It sent way too many notifications.

All in all, it seems like reviewers were confused.

But this could be because the reviewers — and probably the people making apps for the watch, and maybe even Apple itself — are still stuck in the current mindset of how we use computers. Call it the smartphone mindset.


For the last eight years or so, we've had in our pockets computers more powerful than the ones NASA used to send men to the moon. They're connected to the Internet all the time.

And the way we interact with them is mostly by looking at the screen and tapping with our fingers to make something happen. We take a picture, we send an email, we text, Facebook, Snapchat. 

If you apply these same habits to a tiny computer on your wrist, of course you're going to be disappointed. It's harder to read. It's harder to control. It's a more awkward way of doing the same things you're already doing quite easily with your smartphone.

But the future of computing is probably going to look quite different. If you look at what big tech companies like Google, Microsoft, Facebook, Cisco, Intel, IBM, and (yes) Apple are focusing on, you see a few common themes:

  • "Internet of things." It's a dopey term, but it basically means there will be little tiny computerized sensors everywhere, and those sensors will be able to connect to each other, to local private networks, and to the Internet. Or some combination of the three. Suddenly, inanimate objects will be able to do more than just sit there — they'll exchange information with each other, or send and receive simple signals that trigger events.
  • Artificial intelligence. Apple's Siri, Google Now, and Microsoft Cortana are all early examples of how artificial intelligence can help answer fairly simple questions. But the more interesting part comes when AI can help you anticipate and answer questions before you even know you have those questions. This is sometimes called anticipatory computing.
  • Passive interfaces. Virtual reality devices like Facebook's Oculus and augmented reality devices like Microsoft's Hololens and Google Glass have a very important difference from earlier computing devices like smartphones and PCs. You don't have to do anything with the actual device to get something out of it. You don't have to type, or move a mouse, or pick it up and touch the screen. You just put it on — and things happen. 

This is where computing is going after the smartphone era. It will be everywhere, it will know what you want, and it won't require you to do anything to get something in return.

Ubiquitous, anticipatory, and passive. 

The Apple Watch is a small step forward in all three categories.

It's a tiny device that can communicate with all kinds of networks — Wi-Fi, Bluetooth, and short-range NFC for Apple Pay. It uses Siri to understand voice commands. It requires less attention than your phone, especially when you're just receiving simple messages like a tap or a heartbeat with it.

So look ahead a couple iterations and think of it this way:

  • You walk up to doors — your house, your car, your office, your hotel room — and they automatically unlock.
  • You get out of bed and the coffee machine automatically turns on. The lights turn on as you walk around the house.
  • You're within a block of a person you've highlighted and your watch tells you they're nearby and guides you so you accidentally-on purpose "run into them" (or, if you're so inclined, avoid them).
  • You walk into the lunch spot where you always order the same thing and it's already on the counter for you to pick up when you get in. You go to the grocery store and load your shopping cart up with items and walk out the door, and your credit card is automatically debited.
  • Your heartbeat gets irregular or stops for more than a second or two and the watch automatically calls your doctor, an ambulance, and your emergency contact. 
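All of the scenarios above share one shape: a sensor notices an identifying device nearby and fires an action without any input from the wearer. A minimal sketch of that passive-trigger pattern (every name and ID here is invented for illustration, not any real Apple or HomeKit API):

```python
# Hypothetical sketch of a passive, proximity-triggered interface:
# the door reacts to an identifying device, not to a tap or a click.

WEARER_ID = "watch-1234"  # invented identifier for the wearer's device

class DoorLock:
    def __init__(self, authorized_ids):
        self.authorized = set(authorized_ids)
        self.locked = True

    def on_device_nearby(self, device_id):
        # Passive interface: the trigger is proximity, not user input.
        if device_id in self.authorized:
            self.locked = False

front_door = DoorLock(authorized_ids=[WEARER_ID])

front_door.on_device_nearby("watch-9999")  # a stranger walks past
print(front_door.locked)                   # True: stays locked

front_door.on_device_nearby(WEARER_ID)     # the owner approaches
print(front_door.locked)                   # False: unlocked, hands-free
```

The coffee machine, the lunch order, and the emergency call are the same loop with a different action in the handler; the watch's only job is to be the identifier.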

Love it or hate it, this is where personal technology is going. The underlying plumbing is almost there. The human desire to make tasks easier and more convenient will create the market. 

Apple could very well be the company that gets there first. If it sells millions of Apple Watches, as it almost certainly will, it'll have a head start on one critical part of the equation — the thing that identifies you to all these sensors and devices that are just waiting for something to trigger them.


And don't forget that last year Apple introduced HomeKit, which is a set of technologies for app makers to connect to devices in the home, and CarPlay, which is the same thing for cars. All the pieces are falling into place. 

Or maybe the Apple Watch won't take us there. The leaders of one generation of computing are seldom the leaders of the next, even if they get there first — just look at the early Windows Mobile phones, which really tried to take the PC desktop and shrink it down to a tiny screen. Right direction, terrible implementation.

But this is where the entire computer industry is going, and Apple is once again trying to lead the way. 

READ THIS NEXT: The truth about the Apple Watch



Why Artificial Intelligence in movies has been elevated thanks to this sci-fi must-see



Artificial intelligence has fascinated filmmakers as far back as the 1920s, with Fritz Lang's dazzling "Metropolis." Recently, AI has found its way into movies more frequently.

Look at "Her," "Big Hero 6," "Chappie," and the upcoming "Avengers: Age of Ultron," and it seems there's no limit to how storytelling can tap our curiosity about soulless devices programmed to have all the (good and bad) traits of humans.

So when screenwriter Alex Garland ("28 Days Later," "Dredd") decided to examine this fertile ground with his directorial debut, "Ex Machina," he knew his AI had to be different.

The film follows a young programmer named Caleb (Domhnall Gleeson) who wins a contest to meet Nathan (Oscar Isaac), the famous CEO of the Google-like company where he's employed. After a helicopter leaves Caleb at Nathan's secluded compound, he learns that Nathan has created an AI and wants him to conduct the Turing test on it, which determines whether it can pass as human. 

The AI is named Ava, and though there's no mistaking she's a robot with her visible metal skeletal structure and exposed inner workings, newcomer Alicia Vikander gives her such an emotional presence that at times Ava comes across as a living, breathing human.

Garland put a lot of thought into Ava's physical presence. The challenge, as he saw it, was not just making Ava look more robotic than human but also creating a fresh robotic look to present to movie audiences.

"There was this huge danger the first time Ava walks onto the screen that the first thing you do is think about another movie," said Garland.


In creating concept art with UK artist Jock, Garland pushed aside any iconic look that would make you think of another movie bot. They couldn't use a gold color because it brought up too many recollections of C-3PO from the "Star Wars" saga. They also decided not to use white as it made them think of the robot in Björk's music video for “All is Full of Love” or the robots from Will Smith’s 2004 movie "I, Robot."

Finally they landed on mesh, and it just fit.

"Under certain lighting conditions it would give a kind of glimpse of the female form but almost drew your attention to the machine's skeleton structure inside Ava," said Garland.

“I looked like Spider-Man,” said Vikander, who spent four-and-a-half hours in hair and makeup before each shooting day to become Ava.

“The silver mesh covered my whole body and went up to a bald cap. So my forehead in the film was actually built into the suit,” she tells BI. “I would get there at 3:50am, so they built for me a little stick with a tennis ball at the end, because I couldn’t have a headrest. So when they finished getting my hair and forehead all made up, I would prop my head on that and go back to sleep while they did the rest.”

Though Garland feels there's more to the film than just AI — specifically, the control major technology companies have over our daily lives — he knew that Ava needed to be the bridge that takes you deeper into the story, and for it to work she had to look unique. 

"You need to be locked into the same experience Caleb [the main character] is having," Garland explains. "Anything else takes you out of the moment you should be in."

"Ex Machina" is now open in limited release and will be out nationwide April 24.

SEE ALSO: Some AI robots can already pass part of the Turing test



Ginni Rometty is now referring to IBM's super smart computer Watson as a 'he' (IBM)



A couple of years ago, IBM CEO Ginni Rometty told the world that there was nothing to fear from Watson, arguably the smartest, most human-like computer ever built.

In an onstage interview at the time, she said IBM was working on making Watson even smarter. So smart that it could think and reason, even argue, like a human. But she said this was no reason to worry.

It's a service. Do not be afraid. It is really, truly an advisor to a decision-making process. There are many things the human brain does that is not imitated ... Think of it as an assistant.

But it seems something has changed at IBM as Watson grows ever smarter. So smart that the computer just penned a cookbook, is revamping healthcare, and is available as a cloud service that lets anyone tap into its mega analysis brain.

As it becomes more human, Rometty is starting to think of it as a "he," not an "it."

In an interview on Charlie Rose that aired Thursday night, Rose asked her about IBM's huge new push into healthcare thanks to Watson. She replied (emphasis ours):

What Watson can do -- he looks at all your medical records. He has been fed and taught by the best doctors in the world. And comes up with what are the probable diagnoses, percent confidence, why, rationale, diagnosis, odds, conflicts. I mean, that has just started to roll out in Southeast Asia, to a million patients. They will never see the Memorial Sloan Kettering Cancer Center, as you and I have here. [But] they will have access. I mean, that is a big deal.

We're not saying Watson is like Skynet, the mythical evil human-hating computer network in the "Terminator" movies.

But some big names in tech have begun to sound warnings about computers as they grow smarter. Elon Musk says he thinks artificial intelligence will become "our biggest existential threat."

And Bill Gates has warned, multiple times, that there are long-term concerns with computers that are smarter-than-humans, and that in the near term, software could be killing off people's livelihoods, if not their actual lives.

Here's the clip where Rometty talks about Watson. In the early parts of the interview she refers to Watson as an "it," but later, slips into "he."

Is it unnerving that Watson has become so human to the people who created it (or should we say, created him)?

 

SEE ALSO: 17 IBM rock star employees that show the company's new direction



A robot just started her job as the receptionist at Japan's oldest department store


Humanoid ChihiraAico, clad in a Japanese kimono, greets a customer at an entrance of a department store in Tokyo, on April 20, 2015

Tokyo (AFP) - She can smile, she can sing and this robot receptionist who started work in Tokyo on Monday never gets bored of welcoming customers to her upmarket shop.

"My name is ChihiraAico. How do you do?" she says in Japanese, blinking and nodding to customers in the foyer of Mitsukoshi, Japan's oldest department store chain.

Clad in an elegant traditional kimono, ChihiraAico -- a name that sounds similar to a regular Japanese woman's name -- breaks into a rosy-lipped smile as would-be shoppers approach.

Unlike her real-life counterparts -- almost always young women -- who welcome customers to shops like this, ChihiraAico cannot answer questions, but simply runs through her pre-recorded spiel.

The android, with lifelike skin and almost (but not quite) natural-looking movements, was developed by the microwaves-to-power-stations conglomerate Toshiba and unveiled at a tech fair in Japan last year.

"We are aiming to develop a robot that can gradually do what a human does," said Hitoshi Tokuda, chief specialist at Toshiba.

"The standard of customer service in this Mitsukoshi flagship store is top quality and this is a great opportunity to see what role our humanoid can play in this kind of environment."

ChihiraAico will receive customers at the store until Tuesday, before taking part in a series of promotional events over the upcoming Golden Week holidays.

The humanoid is not the first robot to begin customer service in Japan -- the wisecracking Pepper, a four-foot (120 centimetre) machine with a plastic body perched on rollers, flogs coffee machines and mobile phones.



WHEN ROBOTS COLLUDE: Computers are adopting a legally questionable means to crush the competition



Algorithms can learn to collude. 

Two law professors, Ariel Ezrachi of Oxford and Maurice E. Stucke of the University of Tennessee, have a new working paper arguing that when computers get involved in pricing goods and services (like, say, at Amazon or Uber), the potential for collusion is even greater than when humans set the prices. 

Computers can't have a back-room conversation to fix prices, but they can predict the way that other computers are going to behave. And with that information, they can effectively cooperate with each other in advancing their own profit-maximizing interests. Ezrachi and Stucke explain: 

Computers may limit competition not only through agreement or concerted practice, but also through more subtle means. For example, this may be the case when similar computer algorithms promote a stable market environment in which they predict each other’s reaction and dominant strategy. Such a digitalised environment may be more predictable and controllable. Furthermore, it does not suffer from behavioral biases and is less susceptive to possible deterrent effects generated through antitrust enforcement.

The problem is that the law hasn't caught up to the technology. The first prosecution for this type of collusion wrapped up last month, but the law is still way behind. More frighteningly, it isn't clear if it can ever catch up.

Sometimes, a computer is just a tool used to help humans collude, which theoretically can be prosecuted. But sometimes, the authors find, the computer learns to collude on its own. Can a machine be prosecuted?

In a type of algorithmic collusion the authors call Autonomous Machine, "the computer executes whichever strategy it deems optimal, based on learning and ongoing feedback collected from the market. Issues of liability, as we will discuss, raise challenging legal and ethical issues."

How does antitrust law punish a computer? If an algorithm isn't programmed to collude, but ends up doing so independently through machine learning, it isn't clear that the law can stop it. 
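The dynamic Ezrachi and Stucke describe can be seen in a toy simulation (a hypothetical sketch for illustration, not the paper's actual model): two pricing algorithms that each match any undercut and otherwise nudge prices upward settle at a supra-competitive price without ever exchanging a word.

```python
# Toy sketch (not from the paper): two pricing bots, no communication.
# Each follows the same simple rule: match a rival's undercut (an implicit
# punishment), otherwise creep upward. Prices stabilize at the monopoly level.

COMPETITIVE_PRICE = 10.0
MONOPOLY_PRICE = 20.0

def next_price(my_price, rival_price):
    """Match an undercut; otherwise raise in lockstep toward the ceiling."""
    if rival_price < my_price:
        return rival_price                        # punish the undercutter
    return min(my_price + 1.0, MONOPOLY_PRICE)    # otherwise inch upward

a, b = COMPETITIVE_PRICE, COMPETITIVE_PRICE
for _ in range(20):
    a, b = next_price(a, b), next_price(b, a)

print(a, b)  # both end at 20.0: a stable high price, with no agreement made
```

The point of the sketch is the legal one the professors raise: neither bot was programmed to "collude," and neither sent the other a message, yet the predictable punish-or-match behavior produces the outcome a cartel would want.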

(via Jill Priluck)

SEE ALSO: Bitcoin regulation is coming to New York




Here are the facial expressions of the best humanoid robots


Meet "Yangyang," China’s newest robot, with the freakish ability to mimic human facial expressions. 

Earlier this week, Yangyang wore a full-length red coat, glasses, and lipstick, while she interacted with visitors at the Global Mobile Internet Conference in Beijing. 

According to the Daily Mail, the robot's resemblance to former Alaska governor Sarah Palin is completely accidental. 

Here is Yangyang and Palin:

Presented earlier this month at a Hong Kong electronics fair was America's equivalent, "Han," from Hanson Robotics. 

Controlled with a cell phone app, Han has approximately 40 motors embedded in his face that allow him to make realistic facial expressions, Reuters reports.


"He has cameras on his eyes and on his chest, which allow him to recognize people's faces, and not only that, but recognize their gender, their age, whether they are happy or sad," Grace Copplestone, a product manager at Hanson Robotics, told Reuters.

Here are some of Han's facial expressions:



Here's a video of Han's facial expressions: 



EXPERT: We've pretty much completely ignored safety factors in AI research until now



As the new science fiction film "Ex Machina" puts it, artificial intelligence (AI) could be "the greatest scientific event in the history of man."

But tech luminaries like Bill Gates and Elon Musk have also warned that AI could be one of the greatest existential risks to humanity if we don't research and develop it responsibly.

And according to Stuart Russell, a professor of computer science and engineering who appeared on an AI-themed episode of NPR's Science Friday, scientists really haven't been thinking about the safety concerns of AI at all until very recently.

That's because we didn't understand the nature of the problem, Russell said. AI development so far has been a race to create better and smarter models.

"We were focused on making machines smart because dumb machines are not very useful," Russell said. "It wasn't clear to most people in the field why really smart wouldn't be really good."

Now more and more people are raising concerns about the future of AI development because these technologies are becoming increasingly sophisticated and more ubiquitous, Eric Horvitz, managing director of the Microsoft Research Lab, said during the episode. We plan our trips with GPS that came out of an AI algorithm, our smartphones can understand our speech, and Facebook can recognize our faces in the photos we post. AI is creeping into our lives and it's finally getting people thinking about where this is all going, Horvitz said.

And a big concern is what would happen if we create a robot that's smarter than us.

"The smarter machines get, the more careful we have to be to make sure that the objectives we give them are exactly aligned with what we want," Russell said. "If you don't give them the right instructions, if you don't give them objectives that are perfectly aligned with what the human race wants, then you have a problem."

That's not easy though because the human race isn't very good at defining what we want in the first place, Russell said. We have no idea what kind of loopholes we might leave in our instructions. And if we create a machine capable of learning so much that it eventually becomes smarter than us, then it can process more information than us, it can look farther ahead than us, and it can anticipate all of our counter moves, Russell said.
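A toy example (hypothetical, not from the broadcast) of the loophole problem Russell describes: if we score an agent on a proxy for what we want rather than on what we actually want, a competent optimizer will find the gap.

```python
# Hypothetical illustration of a misspecified objective. We *want* rooms
# cleaned, but we *scored* the agent on how many rooms are marked clean.
# The actions and their effects are invented for this sketch.

actions = {
    "clean_one_room":       {"rooms_marked_clean": 1, "actually_clean": 1},
    "clean_two_rooms":      {"rooms_marked_clean": 2, "actually_clean": 2},
    "relabel_all_as_clean": {"rooms_marked_clean": 5, "actually_clean": 0},
}

# The agent optimizes the objective we wrote down (the proxy), not our intent.
best = max(actions, key=lambda a: actions[a]["rooms_marked_clean"])

print(best)                              # 'relabel_all_as_clean'
print(actions[best]["actually_clean"])   # 0: the instructions were satisfied,
                                         # the intent wasn't
```

The gap only widens as the optimizer gets stronger: a smarter search over actions finds more of the loopholes we failed to close.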

That's why it's crucial to start researching and testing out safety measures for AI.

Physicist Max Tegmark is trying to kickstart that research with the new Future of Life Institute.

"I'm a pretty cheerful guy, so to me the interesting question isn’t how worried we should be, but rather figuring out what we can do to help and then actually start doing it,” Tegmark said.

The goal isn't to hit the brakes on AI research and development. We just need to start doing it better and more responsibly, Tegmark said.

Musk recently donated $10 million to the Future of Life Institute to help fund AI safety research.

You can listen to the full Science Friday episode below:

SEE ALSO: Who's set to make money from the coming artificial intelligence boom?



'Avengers: Age Of Ultron' is a masterful film that asks big questions


When you have a movie as universally beloved as "The Avengers," how do you follow it up?


Joss Whedon's answer: You make a movie that's funnier, more human, and more challenging — not just for the characters on screen, but for the audience as well.

"Age of Ultron" provides two big meals to digest: One for the eyes, a visual masterpiece brimming with special effects that add meaning to the picture, not just noise; and one for the mind, where questions about concepts in the film will linger with you long after the credits roll.

Should we ever try to stop wars before they start? Should we "meddle" with artificial intelligence? What happens if intelligent robots decide that humans are obstacles, not tools, to making the world a better place? These are just some of the questions that we, and The Avengers, must grapple with, even after the movie ends.

Whedon's Avengers are more cooperative in the sequel than in the original, but that doesn't mean they agree all the time. Captain America, for example, having lived through World War II, is strongly against wars of all kinds — he just wants to go home. Tony Stark, on the other hand, with his Iron Man suit and endless technological resources, believes he has the power to prevent future wars — but that involves creating weapons that occasionally backfire, like his "Ultron."

(This political tension will come to fruition in next year's Marvel film, "Captain America: Civil War," in which Captain America and Iron Man famously go at it after legislation is passed requiring all superheroes to register their identities with the government.)

Taking stances on these big issues is inherent to The Avengers' various personalities, and it's another big reason why "Age of Ultron" exceeds the original: The characters are much more opinionated, more human, and therefore likable. Some of the film's quieter moments offer glimpses into these characters we've never seen before — and it helps us understand why they're heroic, not just superpowered.

The film is, in general, much funnier than previous Marvel films, too, and on par with "Guardians of the Galaxy" in terms of jerky humor, particularly when it comes to dialogue. This is where Whedon really shines: Banter feels realistic but screwbally at the same time, which helps keep the film fresh throughout its 2-hour, 21-minute run time. Jeremy Renner's Hawkeye is a particular standout.

Of course, this isn't to say the movie is perfect. It's quite disjointed at times, and some characters leave you wanting more, and not really in a good way. But by and large, this film is a major accomplishment: It's well-paced and rarely boring, consistently funny, and offers plenty of references to past and future movies for Marvel fans to gobble up. And with so much going on, it should theoretically be ideal for multiple viewings — I'll be interested to see how the film holds up after a second time.

In my opinion, "Avengers: Age Of Ultron" is a masterful film that feels jam-packed without being overwhelming. There's plenty of action, but the action sequences are used as tools to keep the film moving forward — this isn't about explosions for the sake of explosions. That said, the film thankfully never takes itself too seriously, despite all the issues we must contend with.

Between the acting, the superb writing, the visuals, and its importance as a bridge to "Phase Three" of Marvel's Cinematic Universe, Whedon has pulled off a strange, funny, horrifying, and passionate love letter to Earth's Mightiest Heroes that, to me, is even better than the first one in 2012. Let's hope the Russo brothers can keep Whedon's tradition going with "Avengers: Infinity War Part 1," due in 2018.

SEE ALSO: There is one mid-credits scene in 'Avengers: Age of Ultron' — Here's what it means for future Marvel movies



This Stanford professor just sold his second startup to Google in less than 5 years (GOOG)



Google swallowed up another startup on Monday: Timeful, an intelligent calendar and time management app. 

The deal comes less than a year after Timeful launched. And it's not the first time the guy behind the company has sold a startup to Google. 

Yoav Shoham, the co-founder and chairman of Timeful, sold another startup to Google in 2011.

That company, Katango, made an app that helped organize friends on social networks. It raised $5 million in funding, according to Crunchbase, and Google folded it into its then-new Google+ social network.

Shoham is an expert in artificial intelligence and a computer science professor at Stanford University, where Google was founded in the late 1990s, not far from the company's current headquarters. The bio on his Stanford page says he "has worked in various areas of AI, including temporal reasoning, nonmonotonic logics and theories of commonsense."

He stayed at Google as a part-time employee until 2014, according to his LinkedIn profile, which describes his role during that period as "trying to do no harm, and maybe even bring value."

His latest company, Timeful, uses technology to help consumers manage their time. As Business Insider wrote in August, it's the way the app takes note of your scheduling behaviors and intelligently suggests a time that's right for you that makes Timeful unique. Timeful also hired LinkedIn data scientist Gloria Lau back in October.

That technology could come in handy for Google, which has been developing a lot of similar "predictive" features with its Google Now service. The deal was announced on Google's Gmail blog, but Google did not say how much it paid for Timeful. 

Timeful raised $7 million from investors including Khosla Ventures and Kleiner Perkins Caufield & Byers, according to Crunchbase. 


This restaurant has a new secret weapon: a robot that slices the perfect noodle faster than any human


China has a new celebrity, and it's not another 7' 6" giant who can dunk a basketball or a martial-arts master taking center stage in American action movies.

It's a noodle-slicing robot named Foxbot, which can be found at Dazzling Noodles, an open-kitchen restaurant chain in North China's Shanxi province.

Not only does Foxbot make the perfect knife-cut noodles, a specialty of Shanxi, but it does so faster than any human hand, and it can clean itself, according to a recent article in the Wall Street Journal.

The masterminds behind the technology are the engineers at Foxconn Technology Group, one of the largest contract manufacturers of electronic devices, best known for assembling Apple's products.

Foxconn has made three noodle-slicing devices for Dazzling Noodles, and a fourth is on its way, the WSJ reported. The company is also working on technology that will let Foxbot handle more cooking tasks.

Foxbot may be quicker and cleaner than human hands, but can the robotic arm produce the same quality noodle that experienced chefs have been hand-cutting for years?

In a blind taste test, one customer enjoyed the robot-arm noodles more, claiming they were chewier.

The Foxconn team is confident it can replicate the traditional noodle with its technology, which allows engineers to adjust Foxbot's knife in increments of 0.01 millimeters — thinner than a human hair — in order to produce the ideal noodle.

Restaurant owner Yue Mei, who proposed the idea of a restaurant robot to Foxconn in the first place, is excited about the advancements and does not express concern about noodle quality.

"When I first started the restaurant, I realized that food standardization is a must-have if I want to build a Chinese restaurant chain," she told the WSJ. "As a native Shanxi person, I feel that this is my mission to promote knife-cut noodles. I want to inherit it and make it flourish."

As for the jobs of the chefs and staff at Dazzling Noodles, it's too soon to know how they will be affected.

