IBM CEO: Artificially intelligent computers will 'change who you are' (IBM)

IBM CEO Ginni Rometty has a pretty startling vision for the future.

She sees the day when every decision humans make — from what we eat to how businesses operate — is made in consultation with human-like computers.

This won't just change your routines, she believes. "It will change who you are," she told the crowd during a keynote speech at the Consumer Electronics Show on Wednesday night.

IBM is working on building such super-intelligent, human-like machines, an effort it calls "cognitive computing."

"This is an era of systems you do not program. They understand, they reason and they learn," she explains.

The ultimate example she gave was IBM's Watson, the smart, talking computer that can learn, reason, and analyze.

Watson is still best known for winning Jeopardy in 2011, but it has "come a long way since then," Rometty said to the crowd.

"At that time what Watson could do was question and answer (which he did beat everyone) and he had five technologies," she says. Today Watson does 32 different functions with expertise in 50 different areas.

IBM has since turned the Watson technology into a bunch of different cloud computing services, allowing developers to embed its smarts into their apps or their devices.

80,000 developers working with Watson

So far, IBM has 80,000 developers in 36 countries who have put Watson into their apps or products, she said during the speech. IBM has another 500 Watson partners who provide additional offerings or services for Watson.

IBM continues to roll out more flavors of Watson, too, like a new Internet of Things-specific Watson unit and a health care-specific one. (Rometty believes that IBM will help usher in a new era of personalized, affordable health care, and that this will be one of her biggest legacies.)

Meanwhile, there's an increasingly long list of companies using Watson to create everything from Under Armour's new uber-personalized fitness app to a popular Japanese robot called "Pepper" that works as a customer service rep at retail stores, banks, hotels, and other places.

Still, she says, IBM has even bigger plans to make Watson more human.

"We're expanding Watson's senses, giving him things like sight. The first way he's learning sight is by reading medical images," she explains. 

She didn't mention giving Watson a sense of smell or taste, but he does already cook. There's a version called Chef Watson.

Explaining her vision for IBM and the future of humanity

Rometty has been under intense pressure from Wall Street throughout 2015 as she revamps IBM into the company she envisions.

She's been selling off business units and reducing IBM's workforce (by selling units and through layoffs). Revenue has been in decline.

In the meantime, under her reign, she's been aggressively investing in cognitive computing and related businesses. She's plowed about $26 billion into it, IBM says, including more than 30 acquisitions of analytics-related technologies.

Rometty says that her shift is already paying off. IBM's growth businesses were generating over $25 billion by the end of 2014 and "through the third-quarter of this year, they've grown 30%," she said at CES.

These units include analytics, computer security tech, cloud computing services and selling cloud computing technology for companies to install in their own private data centers (a traditional IBM business).

In the meantime, she's out to explain her vision of smart computers in our lives and changing who we are.

"I believe we will all reinvent ourselves. And you see a reinvented IBM emerging," she says, adding that she believes that IBM's smart computer technologies will reach millions "if not billions of individuals." 

The 20 most important scientific developments of 2015

Big things happened in the science world in 2015.

Researchers edited the DNA of human embryos, showing that doing so was possible. Their work had flaws, but it demonstrated that we're capable of using incredibly powerful new tools to reshape the building blocks of life in ways that were unimaginable just a few years ago.

Elon Musk's SpaceX launched a rocket that was able to carry a satellite into space before turning around and landing back on Earth, showing that re-using rockets is feasible. That's essential if humanity is to become an interplanetary species.

And that's just the start — we also found traces of water on Mars, planned ways to mine asteroids, and so much more.

The team at Futurism has compiled an infographic with 20 of the biggest stories in the science and tech worlds that happened over the past year. They focus on the development of artificial intelligence, space exploration, drones, genetic editing, and the winner of their first "Futurist of the Year" award — fittingly, Elon Musk. (They also give props to a robot who gave not a full-fledged TED talk, but a TEDx talk — still pretty impressive for an android.)

Check out their picks below, or see more at Futurism's "Year in Science" page.

The mobile revolution is over. Get ready for the next big thing: Robots

The computer industry moves in waves. We're at the tail end of one of those waves — the mobile revolution.

What's next? Robots.

But not the way you think.

The robot revolution won't be characterized by white plastic desk lamps following you around asking questions in a creepy little-girl voice, like I saw at last week's Consumer Electronics Show in Las Vegas. That might be a part of it, but a small part.

Rather, it'll be characterized by dozens of devices working on your behalf, invisibly, all the time, to make your life more convenient.

Some people in the industry use the term "artificial intelligence" or "digital assistants." Others talk about "smart" devices. But none of these terms capture how widespread and groundbreaking this revolution will be. This isn't just about a coffee maker that knows to turn itself on when your alarm goes off, or a thermostat that adjusts to your presence.

(And "Internet of Things"— please stop already.) 

This is about every piece of technology in your life working together to serve you. Robots everywhere, all the time.

Not like the Roomba. More like the movie "Her."

Where we've been

Every 10 or 15 years, a convergence of favorable economics and technical advances kicks off a revolution in computing. Mainstream culture changes dramatically. New habits are formed. Multibillion-dollar companies are created. Companies and entire industries are disrupted and die. 

I've lived through three of these revolutions.

  • The PC revolution. This kicked off in the 1980s with the early Apple computers and the quick-following IBM PC, followed by the PC clones. Microsoft and Intel were the biggest winners. IBM was most prominent among the big losers, but there were many others — basically, any company that thought computing would remain exclusively in the hands of a few huge computers stored in a data center somewhere. By the end, Microsoft's audacious dream of "a computer on every desk and in every home" was real.
  • The internet revolution. This kicked off in the mid 1990s with the standardization of various internet protocols, followed by the browser war and the dot-com boom and bust. Amazon and Google were the biggest winners. Industries that relied on physical media and a distribution monopoly, like recorded music and print media, were the biggest losers. By the end, everybody was online and the idea of a business not having a website was absurd.
  • The mobile revolution. This kicked off in 2007 with the launch of the iPhone. Apple and Samsung were the biggest winners. Microsoft was among the big losers, as its 20-year monopoly on personal computing finally broke. 

A couple of important points.

First, when a revolution ends, that doesn't mean the revolutionary technology goes away.

Everybody still has a PC. Everybody still uses the internet.

It simply means that the technology is so common and widespread that it's no longer revolutionary. It's taken for granted. 

So: The mobile revolution is over.

More than a billion smartphones ship every year. Apple will probably sell fewer iPhones this year than last year for the first time since the product came out. Huge new businesses have already been built on the idea that everybody will have an internet-connected computer in their pocket at all times — Uber wouldn't make sense without a smartphone, and Facebook could easily have become a historical curiosity like MySpace if it hadn't jumped into mobile so adeptly.

This doesn't mean that smartphones are going away, or that Apple is doomed, or any of that nonsense. But the smartphone is normal now. Even boring. It's not revolutionary.

The second thing to note is that each revolution decentralized power and distributed it to the individual.

The PC brought computing power out of the bowels of the company and onto each desk and into each home. The internet took reams of information that had been locked up in libraries, private databases, and proprietary formats (like compact discs) and made it available to anybody with a computer and a phone line.

The smartphone took those two things and put them in our pockets and purses.

Tomorrow and how we get there

This year's CES seemed like an "in-betweener." Everybody was looking for the next big thing. Nothing really exciting dominated the show. 

There were smart cars, smart homes, drones, virtual reality, wearable devices to track athletic performance, smart beds, smart luggage (really), and, yeah, weird little robots with anime faces and little-girl voices.  

But if you look at what all these things have in common, plus what the big tech companies are investing in right now, a picture starts to emerge.

  • Sensors and other components are dirt cheap. Thanks to the mobile revolution creating massive scale for the components that go into phones and tablets, sensors of every imaginable kind — GPS, motion trackers, cameras, microphones — are unimaginably cheap. So are the parts for sending bits of information over various wireless connections — Bluetooth LE, Wi-Fi, LTE, whatever. These components will continue to get cheaper. This paves the way for previously inanimate objects to collect every imaginable kind of data and send simple signals to one another.
  • Every big tech company is obsessed with AI. Every single one of the big tech companies is working on virtual assistants and other artificial intelligence. Microsoft has Cortana and a bunch of interesting behind-the-scenes projects for businesses. Google has Google Now, Apple has Siri, Amazon has Echo, even Facebook is getting into the game with its Facebook M digital assistant. IBM and other big enterprise companies are also making huge investments here, as are dozens of venture-backed startups. 
  • Society is ready. This is the most important point. Think about how busy we are compared with ten or twenty years ago. People work longer hours, or stitch together multiple part-time jobs to make a living. Parenting has become an insane procession of activities and playdates. The "on-demand" economy has gone from being a silly thing only business blogs write about to a mainstream part of life in big cities, and increasingly across the country — calling an Uber isn't just for Manhattan or San Francisco any more. This is the classic situation ahead of a computing revolution — everybody needs something, but they don't know they need it yet.

So imagine this. In 10 years, you pay a couple-hundred bucks for a smart personal assistant, which you install on your phone as an app. It collects a bunch of information about your actions, activities, contacts, and more, and starts learning what you want. Then it communicates with dozens of other devices and services to make your life more convenient.

Computing moves out of your pocket and into the entire environment that surrounds you.

Your alarm is set automatically. You don't need to make a to-do list — it's already made. Mundane phone calls, like those to the cable company and the drugstore, are handled automatically for you. You don't summon an Uber — a car shows up exactly when you need it, and the driver already knows the chain of stops to make. (Eventually, there won't be a driver at all.)

If you're hungry and in a hurry, you don't call for food — your assistant asks what you feel like for dinner or figures out you're meeting somebody and orders delivery or makes restaurant reservations. The music you like follows you not just from room to room, but from building to building. Your personal drone hovers over your shoulder, recording audio and video from any interaction you need it to (unless antidrone technology is jamming it). 

At first, only the wealthy and connected have this more automated lifestyle. "Have your assistant call my assistant." But over time, it trickles down to more people, and soon you can't remember what life was like without one. Did we really have to make lists to remember to do all this stuff ourselves?

This sounds like science fiction, and there's still a ton of work ahead to get there. Nobody's invented the common way for all these devices to speak to each other, much less the AI that can control them and stitch them together. So this revolution is still years away. But not that far.

If you try to draw a comparison with the mobile revolution, we're still a few years from the iPhone. We're not even in the BlackBerry days yet. We're in the Palm Pilot and flip-phone days. The basic necessary technology is there, but nobody's stitched it together yet.

But when they do — once again — trillion-dollar companies and industries will rise and fall, habits will change, and everybody will be blown away for a few years. Then, we'll all take it for granted. 

Chinese search engine Baidu released some of its code after Eric Schmidt urged tech companies to join forces on AI

Baidu, China's leading internet search engine, has released some of its AI (artificial intelligence) code less than a week after former Google CEO Eric Schmidt said technology companies need to start working together on AI if humans want to get the most out of machines.

Schmidt, now executive chairman of Alphabet, Google's parent company, claimed last Monday that AI has the potential to fix some of the world’s "hard problems," including population growth, climate change, human development, and education. In order for this to happen, however, he stressed that companies need to start working together on AI and publish their AI breakthroughs to the academic community.

Four days after Schmidt's remarks, Baidu Research's Silicon Valley AI Lab (SVAIL) published some AI code, known as "Warp-CTC", on code repository GitHub.

The now-public code has been used to build a Baidu speech-recognition system called Deep Speech 2, which can recognise certain short sentences better than humans. It's useful technology for Baidu because the company's many millions of customers often prefer to engage with Baidu services using their voice as typing Chinese characters into a smartphone can be difficult.

Baidu's "Warp-CTC" tool can plug into existing machine learning frameworks being developed by startups and other companies to significantly speed up their AI development efforts. MIT Technology Review reports that a machine learning startup called Nervana, which offers a deep-learning framework to companies that don't have the know-how or resources to develop their own, is already using Warp-CTC in its software.

Yahoo data dump

Last Thursday Yahoo gave machine learning scientists access to a huge dataset in a bid to help them develop computer programs that can think and learn for themselves.

"Data is the life-blood of research in machine learning," said Suju Rajan, director of personalisation science at Yahoo Labs. "However, access to truly large-scale datasets is a privilege that has been traditionally reserved for machine learning researchers and data scientists working at large companies — and out of reach for most academic researchers."

The dataset is a collection of anonymised user interactions with the news feeds on websites like Yahoo News and Yahoo Sports. Yahoo says there are 110 billion events in the 13.5 terabyte file, which is more than 10 times the size of the previous largest dataset released.

Google and Facebook have also published AI code, research, and datasets that help machine learning scientists.

China's internet giants have been slower off the mark, possibly because they see their code as important intellectual property that gives them a competitive advantage over their rivals.

Neil deGrasse Tyson and futurist Ray Kurzweil on what will happen to our brains and everything else

Astrophysicist and StarTalk Radio host Neil deGrasse Tyson sits down with futurist and author Ray Kurzweil for the first time. 

Produced by Kamelia Angelova, Kevin Reilly, and Darren Weaver and by StarTalk Radio, a Curved Light Production, executive producer Helen Matsos, and producer Laura Berland.

StarTalk Radio is a podcast and radio program hosted by astrophysicist Neil deGrasse Tyson, where comic co-hosts, guest celebrities, and scientists discuss astronomy, physics, and everything else about life in the universe. Follow StarTalk Radio on Twitter, and watch StarTalk Radio "Behind the Scenes" on YouTube.

This tech CEO has been given $200 million to invest in 'garage' start-ups

Tech Mahindra, a $4-billion (£2.8 billion) IT services firm in India, is aiming to invest $200 million (£141 million) in what its boss calls "garage startups."

CEO CP Gurnani told Business Insider he has a remit from his board to find some of the world's most exciting startups in artificial intelligence, automation and computing to invest in. 

"The board has said I can spend up to $200 million in startups from across the world. In tandem, the board has also appointed an investment committee to go through each proposal," said Gurnani, who is also the Vice Chairman of the Indian IT Trade body Nasscom.

"What do those startups look like? Well, for example we invested $30 million in a California startup that is looking into virtual reality while, in India, we committed $5 million to a startup that looks into automation of data centres. Some startups will get $2 million, some $20 million. Regardless the board evaluates the merit."

He made the comments on the sidelines of the World Economic Forum annual meeting in Davos, Switzerland. His group counts telecoms giant BT, aviation titan Bombardier and automaker Volvo as clients.

So why is such a huge company in India looking to almost start from scratch with these new companies? Gurnani said that these "garage startups" are the new generation of thinkers that, in the long run, will be key to making sure incumbent companies like Tech Mahindra evolve with the digital revolution — or, as the theme of the WEF meeting puts it, "The Fourth Industrial Revolution."

"The number one reason is because this is what our customers want and what are customers' customers want. They want more productivity, efficiency and better value for their budgets," he said.

"Our customers are constantly looking at new areas of growth and are expanding into new geographies and whatever you want to call it, whether digital disruption or the 'fourth industrial revolution,' we need to be part of that ecosystem and good ideas come out of garages. You have to establish a good relationship with them and listen as they could all potentially help my customers and my customers' customers."

The areas Tech Mahindra is most keen to invest in are artificial intelligence — although the group prefers to call it automation — mobility, software-defined networking, and "the internet of things."

The "internet of things" has become shorthand for an electronic and software network that binds together physical objects, devices, vehicles, and buildings, which also enables these objects to collect and exchange data. Essentially a "smart city."

But this week in Davos, many of the 2,500 delegates from more than 100 countries are keen to point out that there are great risks to global, corporate, and national infrastructure.

UBS highlighted in its white paper, entitled "Extreme automation and connectivity: The global, regional, and investment implications of the Fourth Industrial Revolution," that as connectivity between industries and countries increases, so the system becomes more fragile.

The world is open to catastrophic risks that can fell energy grids, companies, and even national infrastructure.

However, Gurnani is not as worried as some of his counterparts at the WEF meeting.

"Every opportunity comes with challenges. With horse drawn carriages there were automobiles, with airports, machines checking you in. There are always risks with any technology. But the good news is, is that we are talking about it and can tackle it," said Gurnani.

"For example, if you are an architect, if you are building a house in New Dehli or Johannesburg, you make damn well sure that you make sure that building has good security and high walls. You also have a risk mitigation plan."

So with all this cash to invest and Tech Mahindra's deep commitment to cutting-edge technology, WEF is the perfect place to do business.

The conference is used as a platform for world leaders, business and industry executives, and representatives from academia, civil society, media, and arts to help shape global agendas when it comes to global economics, security, public health, education, gender parity, and climate change.

It's an opportunity for the power brokers of the world to set the agenda.

However, Gurnani said that he sees it as more of a place to develop ideas rather than landing deals.

"WEF is a brain spa for me. People always say when I come here that WEF is like speed dating for business and sleepless nights, but I don't view it that way," he insisted.

"I have mindful interactions with some of the most influential people in the world."

A think tank claims Stephen Hawking and Elon Musk have overhyped AI risks and done a 'disservice' to the public

Celebrated scientists and entrepreneurs including theoretical physicist Stephen Hawking and Tesla billionaire Elon Musk have been criticised by a US think tank for overstating the risks associated with artificial intelligence (AI).

The Information Technology and Innovation Foundation (ITIF), a Washington DC-based think tank, said the pair were part of a coalition of scientists and luminaries that stirred fear and hysteria in 2015 "by raising alarms that AI could spell doom for humanity." As a result, they were presented with the Luddite Award.

“It is deeply unfortunate that luminaries such as Elon Musk and Stephen Hawking have contributed to feverish hand-wringing about a looming artificial intelligence apocalypse,” said ITIF president Robert Atkinson. "Do we think either of them personally are Luddites? No, of course not. They are pioneers of science and technology. But they and others have done a disservice to the public — and have unquestionably given aid and comfort to an increasingly pervasive neo-Luddite impulse in society today — by demonising AI in the popular imagination."

The ITIF announced 10 nominees for the award on December 21 and invited the general public to select whom they believed to be the "worst of the worst."

Alarmists touting an AI apocalypse came in first place with 27% of the vote.

Professor Hawking, one of Britain's pre-eminent scientists, told the BBC in December 2014 that efforts to create thinking machines pose a threat to our very existence. He said: "The development of full artificial intelligence could spell the end of the human race." He has made a number of similar remarks since.

Business Insider contacted Hawking on his Cambridge University email address to find out what he makes of the award but didn't immediately hear back.

Hawking's friend Musk, meanwhile, who made his billions off the back of PayPal and Tesla Motors, has warned that AI is potentially more dangerous than nukes.

But they're not the only influential technology leaders with AI concerns.

Microsoft cofounder Bill Gates and Steve Wozniak, the American programmer who developed Apple's first computer, have also given similar warnings.

Hawking and Musk are also supported in their views by Oxford University philosopher Nick Bostrom, who has published a book called "Superintelligence: Paths, Dangers, Strategies." The book argues that true artificial intelligence, if it is realised, could pose a danger to humanity that exceeds every previous threat from technology, including nuclear weapons. Bostrom believes that if AI's development is not managed carefully, humanity risks engineering its own extinction.

The timescale for when machines could become as intelligent as humans is murky at best and some, including the ITIF, argue that governments and enterprises should focus on increasing the rate at which AI is being developed instead of worrying about robots taking over the world.

"If we want to continue increasing productivity, creating jobs, and increasing wages, then we should be accelerating AI development, not raising fears about its destructive potential," Atkinson said.

He added: "Raising sci-fi doomsday scenarios is unhelpful, because it spooks policymakers and the public, which is likely to erode support for more research, development, and adoption. The obvious irony here is that it is hard to think of anyone who invests as much as Elon Musk himself does to advance AI research, including research to ensure that AI is safe. But when he makes inflammatory comments about ‘summoning the demon,’ it takes us two steps back."

ITIF said it created the annual Luddite Awards to highlight the year’s worst anti-technology ideas and policies in action, and to draw attention to the negative consequences they could have for the economy and society.

Facebook users upload so many videos, it had to invent a whole new system to deal with them (FB)

Facebook users are uploading so many videos to the social network that its engineers had to come up with an entirely new system for handling them, the company said today. 

Luckily for Facebook's dedicated users, the Streaming Video Engine, as it's called, was able to be put in place behind the scenes, without anyone ever noticing.

All users know is that their videos are getting uploaded and ready for viewing up to 10 times faster than they were before.  

This new Streaming Video Engine was built during Facebook's ramp-up from 1 billion video views per day in January 2014 to more than 8 billion a day now — a milestone it hit in late 2015.

The key concept is the engine's ability to break up a video into chunks, letting Facebook process and upload several pieces simultaneously. Before, Facebook had to process videos all in one go, giving the system one big chokepoint.
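
As a generic sketch of that idea (not Facebook's actual system), splitting an upload into fixed-size chunks lets several workers handle different pieces at once instead of pushing the whole file through a single pipeline:

```python
# A minimal illustration of chunked, parallel processing. The chunk size,
# worker count, and process_chunk stand-in are assumptions for the example.
from concurrent.futures import ThreadPoolExecutor

CHUNK_SIZE = 8 * 1024 * 1024  # 8 MB per chunk

def read_chunks(path):
    """Yield (index, bytes) pieces of the file in order."""
    with open(path, "rb") as f:
        index = 0
        while True:
            data = f.read(CHUNK_SIZE)
            if not data:
                break
            yield index, data
            index += 1

def process_chunk(item):
    index, data = item
    # stand-in for transcoding or uploading one segment
    return index, len(data)

def process_video(path):
    with ThreadPoolExecutor(max_workers=4) as pool:
        results = list(pool.map(process_chunk, read_chunks(path)))
    # reassemble in order once every chunk has finished
    return [size for _, size in sorted(results)]
```

In a real system each chunk would be transcoded and stored, but the point is the parallel structure: no single chunk blocks the rest of the upload.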

Now that it's in place, Facebook's new Streaming Video Engine handles all the video uploads from all of its apps and services, including the core social network, Facebook Messenger, and Instagram. 

And it's just in time, too — since being put in place late last year, the Streaming Video Engine has already handled three times the peak video traffic of the old system, Facebook says.

Preparing for the next video wave 

Looking to the future, the Streaming Video Engine is a solid foundation for serving up the next wave of video, Facebook says. That includes recent features like Profile Videos, as well as 360-degree video content that's primed for virtual reality.

Indeed, virtual reality is also a big focus area for Facebook's video teams, given the fact that it owns the forthcoming and much-hyped Oculus Rift VR headset.

To that end, Facebook also today announced new updates to its 360-degree video technology that reduces the file size of uploaded VR videos, while also dramatically cutting down on the buffering time to play them. That's important, given the sheer quantity of data that a 360-degree, high-resolution VR video has to get across.

Finally, Facebook announced today a new approach to its artificial intelligence strategy for chopping up and analyzing videos. Right now, most models for artificial intelligence use a so-called "supervised" model, where AI engineers have to help a computer understand what it's seeing in a video.

But with a new model discussed today, Facebook AI Research's Vision Understanding team thinks it has the foundation for letting the system learn what's in videos on its own, through a novel process where it analyzes each "voxel," or video pixel, and crunches it individually. 

Processing, analyzing, and serving up videos is both a huge challenge and a huge opportunity, especially in the era of streaming video and virtual reality — just ask IBM, which just launched a big video services initiative earlier today.

Here are some of the greatest achievements of Marvin Minsky, artificial intelligence pioneer

We've lost one of the world's most brilliant scientists.

Marvin Minsky, the MIT scientist who helped pioneer the field of artificial intelligence and laid the foundations for the computer and the internet, has died at 88, The New York Times reports. The cause was a cerebral hemorrhage, according to his family.

"The world has lost one of its greatest minds in science," Nicholas Negroponte, founder of the MIT Media Lab, wrote in an email to colleagues, according to The Washington Post.

Beginning in the 1950s, Minsky began working to create intelligent machines, a field that would come to be known as artificial intelligence, or AI.

AI has become vastly more sophisticated since then, though we have yet to develop a machine that has true general intelligence — the ability to do anything a human being can. But that never daunted Minsky:

"The problem of intelligence seemed hopelessly profound," Minsky told The New Yorker when it profiled him in 1981. "I can't remember considering anything else worth doing."

Here are some of Minsky's greatest achievements in AI:

  • In 1951, Minsky built "SNARC" (short for Stochastic Neural Analog Reinforcement Calculator) — a neural network that could be considered the first artificial learning machine.
  • He co-founded the MIT Artificial Intelligence Project (later renamed the Artificial Intelligence Laboratory) in 1959 with computer scientist John McCarthy, who coined the term "artificial intelligence." The lab was part of the ARPAnet — the precursor to the internet — and helped pioneer the notion that software should be shared freely (aka open-source).
  • In the 1960s, Minsky developed some of the first mechanical arms, laying the foundation for modern robotics.
  • During the early 70s, along with computer scientist Seymour Papert, he developed "The Society of Mind" theory of human intelligence (described in his 1986 book), based on research in developmental child psychology and artificial intelligence.
  • Director Stanley Kubrick consulted Minsky for his film "2001: A Space Odyssey," according to The New York Times — which contains perhaps the most famous AI in film history, HAL 9000.

These were just a few of Minsky's many achievements, which also spanned fields such as computational linguistics, mathematics, and optics.

Neil deGrasse Tyson explains what will happen when robots take our jobs

In recent years, society has seen major advances in robotics and computing, which leads some to believe that one day robots will become smart enough to take over all of our jobs. Neil deGrasse Tyson has another theory on how the future of robotics might play out.

Produced by Darren Weaver and Kamelia Angelova. Additional production by Kevin Reilly.

A Google computer program just destroyed a human champion in a game that's even harder than chess

In what may be one of the bigger achievements in artificial intelligence since IBM's Deep Blue computer beat world chess champion Garry Kasparov in 1997, a computer program developed by the British AI company Google DeepMind has beaten the reigning European champion in the game of Go.

The program, dubbed AlphaGo, beat the human European Go champion Fan Hui by five games to none, on a full-size Go board with no handicap — a feat thought to be at least a decade away, according to a study published Wednesday in the journal Nature.

The AI also won more than 99% of the games it played against other Go-playing programs.

In March, the AI program will go head-to-head with the world's top Go player, Lee Sedol, in Seoul, Korea.

Eventually, the researchers hope to use their AI to solve problems in the real world, whether through making medical diagnoses or by modeling the climate.

But it's still a leap from playing a specific game to solving real-world problems, experts say.

The world's most complex board game

A two-player board game developed in China more than 2,500 years ago, Go is widely considered to be one of the hardest challenges in game AI research.

"It's probably the most complex game ever devised by humans," study coauthor and Google DeepMind CEO Demis Hassabis — an avid Go player himself — said Tuesday in a conference call with reporters.

The game board consists of a grid of intersecting lines (typically 19 by 19, though other sizes are also used). Players play with black and white pieces called "stones," which can "capture" other pieces by surrounding them. The goal of the game is to surround the largest area of the board with your stones by the game's end.

Go is a much more challenging game for a computer than chess. For one thing, Go has about 200 possible moves per turn, compared with roughly 20 in chess. For another, it's very hard to quantify whether you're winning, because unlike chess, you can't just count up what the pieces are worth.

There are more possible board arrangements in Go than there are atoms in the universe, Hassabis said. That means it's impossible to play the game by brute force — trying all possible sequences of moves until you find a winning strategy.
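
A quick back-of-the-envelope calculation shows why. Using the rough figures above (about 200 moves per turn in Go versus about 20 in chess) and assuming typical game lengths of roughly 150 and 80 moves respectively, the search spaces dwarf the roughly 10^80 atoms in the observable universe:

```python
# Rough game-tree sizes: branching factor raised to the game length.
# The branching factors come from the article; the game lengths are
# approximate assumptions.
import math

def log10_tree_size(branching_factor, game_length):
    return game_length * math.log10(branching_factor)

print(f"chess: ~10^{log10_tree_size(20, 80):.0f} move sequences")    # roughly 10^104
print(f"go:    ~10^{log10_tree_size(200, 150):.0f} move sequences")  # roughly 10^345
```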

To solve Go, Hassabis, DeepMind's David Silver, and their colleagues had to find a different approach.

Building a Go-playing AI

Their program involved two different neural networks (systems that process information the way neurons do in the brain): a "value network" that evaluates the game board positions, and a "policy network" that selects how the AI should move.

The value network spits out a number that represents how good a particular position is for winning the game, whereas the policy network gives a probability for how likely each move is to be played.
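
A toy sketch of those two ideas is below. It is not DeepMind's architecture (AlphaGo's real networks were separate and far deeper), just a minimal PyTorch module with a policy head that scores every point on the board and a value head that scores the position as a whole:

```python
# Toy illustration of a policy head and a value head sharing one trunk.
import torch
import torch.nn as nn

class TinyGoNet(nn.Module):
    def __init__(self, board=19, channels=32):
        super().__init__()
        self.trunk = nn.Sequential(
            nn.Conv2d(3, channels, 3, padding=1), nn.ReLU(),
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(),
        )
        # policy head: a probability for every point on the board
        self.policy = nn.Sequential(
            nn.Flatten(), nn.Linear(channels * board * board, board * board)
        )
        # value head: a single score for how good the position is
        self.value = nn.Sequential(
            nn.Flatten(), nn.Linear(channels * board * board, 1), nn.Tanh()
        )

    def forward(self, x):
        h = self.trunk(x)
        move_probs = torch.softmax(self.policy(h), dim=1)
        position_value = self.value(h)
        return move_probs, position_value

net = TinyGoNet()
board = torch.zeros(1, 3, 19, 19)  # e.g. planes for black stones, white stones, whose turn
probs, value = net(board)
print(probs.shape, value.shape)    # torch.Size([1, 361]) torch.Size([1, 1])
```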

The researchers trained their AI on data from the best players on the KGS Go Server, a game server for playing Go.

The AlphaGo AI relies on a combination of several machine-learning techniques, including deep learning — an approach to learning that involves being exposed to many, many examples. For example, Google once used this approach to train a computer network to recognize cats by watching YouTube videos.

DeepMind combined deep learning with an approach known as Monte Carlo tree search, which involves choosing moves at random and then simulating the game to the very end to find a winning strategy.
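
The random-playout idea at the core of that technique can be sketched in a few lines. The `game` interface below (legal_moves, play, copy, is_over, winner) is hypothetical, and a real implementation also builds a search tree over these rollouts rather than scoring each move independently:

```python
# The random-playout step at the heart of Monte Carlo tree search.
# `game` is a hypothetical board-game object, not a real library.
import random

def rollout_value(game, player, n_playouts=100):
    """Estimate how good the current position is for `player` by
    finishing the game many times with random moves."""
    wins = 0
    for _ in range(n_playouts):
        sim = game.copy()
        while not sim.is_over():
            sim.play(random.choice(sim.legal_moves()))
        if sim.winner() == player:
            wins += 1
    return wins / n_playouts  # fraction of random playouts won

def best_move(game, player):
    # try each legal move, then keep the one whose playouts win most often
    scores = {}
    for move in game.legal_moves():
        next_position = game.copy()
        next_position.play(move)
        scores[move] = rollout_value(next_position, player)
    return max(scores, key=scores.get)
```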

Many of today's AIs rely on what's called "supervised learning," which is like having a teacher that tells the program whether it's right or wrong. By contrast, DeepMind trained its AI to learn by itself.

On the conference call, Nature senior editor Tanguy Chouard, who was at Google DeepMind's headquarters when the feat was accomplished, described the atmosphere at the event:

"It was one of most exciting moments in my career," Chouard said. "Upstairs, engineers were cheering for the machine, but downstairs, we couldn't help rooting for the poor human who was being being beaten."

One of AI's pioneers, Marvin Minsky, died earlier this week. "I would have loved to have heard what he would have said and how surprised he would have been" about the news, Hassabis told reporters.

'We're almost done'

Most experts in the field thought a victory in Go was at least a decade away, so it came as a surprise to those like Martin Mueller, a computer scientist at the University of Alberta, in Canada, who is an expert in board game AI research.

"If you'd asked me last week" whether this would happen, Mueller told Business Insider, "I would have said, 'No way.'"

Facebook is reportedly developing an AI to beat human Go players, but Hassabis told reporters that the social-media giant's program wasn't even as good as the current best Go AIs.

Go had been one of the few classical games — possibly the last — in which humans were better than machines.

In February 2015, Hassabis and his colleagues built an AI that could beat people at Atari games from the popular 1980s gaming console.

AI programs have already beaten humans at chess, checkers, and Jeopardy, and they have been making inroads on games like poker.

"In terms of board games, I'm afraid we're almost done," Mueller said.

The AI has yet to prove itself against the world champion Go player in March, but this is an impressive first step, he added.

The rapid progress in AI over the past few years has spawned fears of a dystopian future of super-intelligent machines, stoked by dire warnings from Stephen Hawking and Elon Musk.

But Mueller's not too concerned yet. As he put it, "It's not Skynet — it's a Go program."

Google's Go win was a massive step forward for AI, but machines have a ways to go before they're smarter than humans

The computer science world erupted with excitement earlier this week, when an artificial intelligence program made by the Google-owned AI company DeepMind bested the human European champion in the 2,500-year-old board game of Go.

Considered one of the most challenging games in existence, Go has held out as one of the last traditional board games at which humans are better than machines. So DeepMind's achievement was no doubt impressive, and Facebook has been making strides in this area too.

But AI still has a ways to go before it exceeds human intelligence in everyday life.

Gary Marcus, a neuroscientist at NYU who blogs about AI for the New Yorker, put the achievement into context in a post on Medium.

"With so much at stake as people try to handicap the future of AI, and what it means for the future of employment and possibly even the human race, it's important to understand what was and was not yet accomplished," Marcus writes.

What AI has and hasn't achieved 

For starters, Fan Hui, the Go champion bested by Google's AI program, is not a world champion of the game — he's ranked 633rd globally, according to Marcus. DeepMind does have plans for its AI to go head-to-head with the world champion, Lee Sedol, but not until March.

Still, even if DeepMind's AI beats Sedol in March (which many expect it will), that only means the program is extremely good at Go — it says nothing about more general applications of its intelligence.

As Marcus writes, "The real question is whether the technology developed there can be taken out of the game world and into the real world."

So what would that look like? Ultimately, the DeepMind researchers hope to apply their AI to areas like medicine, where it could one day make medical diagnoses, or to climate, where it could help create statistical models of weather patterns. Those are areas that won't be nearly as easy as beating a board game.

IBM's Watson, the AI program that stunned the world by beating the human champions at Jeopardy! in 2011, has been repurposed for medical research. But it's struggling to meet the revenue goals the company set just a few years ago, as The Wall Street Journal reported.

Unlike games like Go or Jeopardy!, the real world doesn't have strict rules. There are no exact answers. Real life requires making decisions based on limited information, something humans are very good at.

DeepMind and many other AI programs today rely on techniques that require them to be trained on massive amounts of data. By contrast, humans can often learn new concepts based on just one or two examples.

Still, there's been some progress. In December, researchers at MIT announced they had created an AI that could learn to draw alphabet letters after seeing just a single example.

But the day when a Terminator-like AI can rule the world is probably still a long way off, according to Martin Mueller, a computer scientist at the University of Alberta, in Canada, who's an expert in board game AI research.

So while DeepMind's AI is impressive, Mueller told Business Insider, "it's not Skynet — it's a Go program."

Google DeepMind just beat Europe's Go champion but now it wants to take on the world #1

It was a result that sent shockwaves through the Go community. AlphaGo, the computer created by DeepMind, the artificial intelligence (AI) arm of Google, thrashed European Go champion Fan Hui 5-0 — the first time a computer program has beaten a professional player of the ancient Chinese game.

Played on a board with a 19x19 grid of black lines, Go is such a complex game that enthusiasts hoped it would be years, or perhaps decades, before machines would be able to triumph over the best human players.

But now that time scale is shortening and AlphaGo is scheduled to play the world’s top player, Lee Sedol, over five games in March.

Mr Lee is a stronger player than Mr Fan and for now remains confident. "This is the first time that a computer has challenged a top human pro in an even game," he said. "I have heard that Google DeepMind’s AI is surprisingly strong and getting stronger, but I am confident that I can win, at least this time."

It’s the "at least" that’s significant here. The parallels with chess are ominous. IBM’s Deep Blue lost 4-2 to Garry Kasparov the first time they played in Philadelphia in 1996, but triumphed 3.5-2.5 a year later in New York. Mr Lee may not succumb the first or even the second time, but in the end he or a successor will, and another bastion will have fallen.

Complexities of Go

Go is such a complicated game that, until recently, programs could defeat only amateurs, and the Google team had to use a new approach. It now looks ahead by playing out the rest of the game in its imagination, many times over.

The program involves two neural networks, software that mimics the structure of the human brain. It was trained by observing millions of games of Go and evolved to predict expert moves 57% of the time. The network was then set to play against itself, learning from its victories and losses as it carried out more than a million individual games over the course of a day.
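
The first stage of that training is ordinary supervised learning: show the network a board position and penalise it when it fails to predict the move the expert actually played. A minimal sketch of one such training step (toy network and made-up data, not DeepMind's code) looks like this:

```python
# A toy supervised training step: learn to predict the expert's move.
import torch
import torch.nn as nn

policy_net = nn.Sequential(nn.Flatten(), nn.Linear(3 * 19 * 19, 361))  # toy stand-in network
optimizer = torch.optim.SGD(policy_net.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()

def training_step(positions, expert_moves):
    """positions: (batch, 3, 19, 19) board planes; expert_moves: (batch,) indices 0..360."""
    optimizer.zero_grad()
    logits = policy_net(positions)
    loss = loss_fn(logits, expert_moves)  # penalise disagreeing with the expert
    loss.backward()
    optimizer.step()
    return loss.item()

# fake batch just to show the call; the real system trained on millions of human moves
loss = training_step(torch.randn(8, 3, 19, 19), torch.randint(0, 361, (8,)))
```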

This is only possible, of course, due to the huge improvements in computing power in recent decades. And the bottom line is that the machines are, or will soon be, able to defeat the best humans at most games.

We in the chess community have had to deal with this for nearly two decades now, and the solution has been to accept that they will beat us in single combat but work around it.

We know that as human beings we will make small mistakes, however well we play. Chess-playing computers ("engines") are uniquely well placed to exploit these, and once they have a material advantage they are almost totally unplayable. But we can console ourselves that they still don’t create that much themselves and, rather than banging our heads against a brick wall, we can use them as superb training agents.

We use "engines" extensively in preparing for games — training in which the crucial element is that the human must lead the machine rather than following.

And after these practice games we always check with the "engines" to find all the mistakes we’ve made and hidden tactics we’ve missed.

Sprinters don’t run against race cars and there’s no reason that human Go or chess players should compete directly against computers.

The Go community will have to adjust but if, like us chess players, they learn to use the new technology, then a new generation will surely arise with an understanding and aesthetic that is different and in some ways superior to their predecessors'. And with free top-class training at their fingertips, the best players will develop much younger.

It’s a shock to the human ego that machines can emulate our intelligence. But rather than fight against them we should embrace the opportunities they bring.

Google DeepMind used new AI techniques to navigate a 'Doom'-like maze with apples and portals (GOOG)

Not content with mastering centuries-old Chinese board game Go, DeepMind has now pitched an artificial intelligence (AI) agent against a more contemporary game called "Labyrinth."

The Google-owned company, which is based in King's Cross in London, published a video on YouTube showing an AI navigating a computer game with a 3D maze that looks like a level from the 90s shooter game "Doom."

In the game, the AI is rewarded for finding apples and portals that teleport it to somewhere else in the maze. The AI has to score as many points as possible in a minute.

"This task is much more challenging than [a driving game] because the agent is faced with a new maze in each episode and must learn a general strategy for exploring mazes," DeepMind said in a paper published last week by eight of the company's most prominent academics.

The authors explain that the AI successfully learned a "reasonable strategy for exploring random 3D mazes using only a visual input."

Unlike other AIs that play games independently, this particular AI had no access to the game's internal code, according to New Scientist. That means the AI had to learn the game in the same way that a human would — by looking at the screen and deciding how to move forward from there.

Changing AI tactics

Last year the DeepMind team created an AI capable of learning and playing 49 different games from the Atari 2600 — a gaming console from the 1980s. The AI, which wasn't told the rules of the games and instead had to watch the screen to develop its own strategies, beat the best human scores on 23 of 49 Atari games.

Mastering the Atari games involved using a technique called reinforcement learning, which rewards the AI for taking steps that boost its score, in conjunction with a deep neural network that analyses and learns patterns on the game screen. The AI also used a technique called experience replay, meaning it could look back into its memory and study the outcome of past scenarios.
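
Experience replay can be sketched as a simple buffer: the agent stores past (state, action, reward, next state) transitions and later re-learns from random samples of them, which breaks up the correlation between consecutive frames. The class below is a minimal illustration with made-up names and sizes, not DeepMind's implementation:

```python
# A minimal experience replay buffer.
import random
from collections import deque

class ReplayBuffer:
    def __init__(self, capacity=100_000):
        self.memory = deque(maxlen=capacity)  # oldest experiences fall off the end

    def add(self, state, action, reward, next_state, done):
        self.memory.append((state, action, reward, next_state, done))

    def sample(self, batch_size=32):
        # a random minibatch of past transitions to learn from again
        return random.sample(self.memory, batch_size)

    def __len__(self):
        return len(self.memory)
```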

However, experience replay is hard to scale up to more advanced problems, according to DeepMind's latest paper.

To overcome this issue, DeepMind used a technique called asynchronous reinforcement learning, which involves multiple versions of an AI working together to tackle a problem and compare their experiences.
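
A toy illustration of the asynchronous idea: several workers explore on their own and apply their updates to one shared set of parameters as they go, instead of saving everything into a replay memory first. This is a sketch of the concept only, not DeepMind's algorithm:

```python
# Several workers asynchronously updating one shared parameter store.
# The "gradient" here is a random stand-in; real workers would compute
# updates from their own gameplay.
import threading
import random

shared_params = {"value": 0.0}
lock = threading.Lock()

def worker(worker_id, steps=1000):
    for _ in range(steps):
        local_update = random.gauss(0.001, 0.01)  # stand-in for a gradient step
        with lock:                                # apply it to the shared parameters
            shared_params["value"] += local_update

threads = [threading.Thread(target=worker, args=(i,)) for i in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print("shared parameters after asynchronous updates:", shared_params)
```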

This approach requires less computing power, according to New Scientist. The AI that beat the Atari games required eight days of training on high-spec machines. The new AI achieved better performance on lower-spec systems in four days.

These GIFs will convince you that robots won't take over the world any time soon

If you're like Elon Musk and Stephen Hawking and fear the impending war between humans and killer robots, watch this video.

It just may convince you that robots have a long way to go before they're able to think, walk, and act like humans.

From overzealous bartenders to not-so-agile soccer players, check out some hilarious robot fails from this video uploaded to YouTube by user Sky Walker.

Robots have been steadily getting smarter in recent years, especially at tasks that demand complex reasoning, such as chess, Jeopardy, and even stock trading.



But ask them to perform a basic physical human-like task ...

And they completely fall apart.

How Google's AI is teaching itself to play computer games like a human

Google has made massive strides in refining the artificial intelligence built by its DeepMind unit in just the last year.

The most recent example of that fact took place in late January, when DeepMind's AI was able to beat a professional human player for the very first time at the complex game of Go.

But last Thursday Google showed yet another indicator of how far its AI has advanced: its ability to master computer games like a human.

Here's a breakdown of what the AI mastered and what it means for the future:

Google's AI first made waves in February 2015 when it learned to play and win games on the Atari 2600 without any prior instructions on how to play.

The computer beat all human players in 29 Atari games, and performed better than any other known computer algorithm in 43 games.

AI researchers have told Tech Insider multiple times that this was the most impressive technology demonstration they've ever seen.



The AI was able to master the Atari games by combining reinforcement learning with a deep neural network.

Reinforcement learning is when AI is rewarded for taking steps to improve its score. Combining this technique with a deep neural network, which is when the AI analyzes and learns patterns on the game screen, allowed DeepMind to master the Atari games.
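
In its simplest, tabular form, that reward-driven update looks like the snippet below. DeepMind's agents replace the table with a deep neural network, but the idea of nudging a value estimate toward the reward just received is the same. All values here are toy numbers:

```python
# The tabular Q-learning update: learn the value of actions from rewards.
from collections import defaultdict

Q = defaultdict(float)          # estimated value of taking `action` in `state`
alpha, gamma = 0.1, 0.99        # learning rate, discount factor

def q_update(state, action, reward, next_state, next_actions):
    best_next = max((Q[(next_state, a)] for a in next_actions), default=0.0)
    target = reward + gamma * best_next
    Q[(state, action)] += alpha * (target - Q[(state, action)])

# e.g. the agent scored 1 point by moving right from state "s0" into "s1"
q_update("s0", "right", 1.0, "s1", ["left", "right"])
print(Q[("s0", "right")])  # nudged toward the reward it just received
```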



But it's difficult to use that technique to solve more advanced issues — so the researchers came up with a new plan.

The AI instead used asynchronous reinforcement learning, in which multiple versions of the AI tackle a problem in parallel and compare which methods work best.

Here we see that tactic being used in a driving computer game.



Microsoft's latest iPhone app can identify your dog's breed based on a photo (MSFT)

Microsoft introduced a new app on Thursday that anyone with a dog should play with because it's a lot of fun.

It's called Fetch!, and it's available for iPhones and on the web. It uses artificial intelligence techniques to classify images of real-world dogs into breeds. On the web, users can upload a photo of a dog; on the phone, they can take a picture of their pet using the camera.

If you upload a picture of, say, a Rhodesian ridgeback, Microsoft should be able to confirm the dog's breed. 

It's the latest in a line of fun, silly apps released by Microsoft Garage, an "outlet for experimental projects" designed to show off creative and unexpected ways to apply Microsoft's expertise in artificial intelligence. In the past year, Microsoft has released apps that detect and measure mustaches in photos or guess your age, for example.

Like Microsoft's other AI apps, Fetch! should become more accurate as users upload more photos and data. More technical information is available here.
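
As a rough illustration of how this kind of breed classifier can be built (this is not Microsoft's model), a pretrained ImageNet network already distinguishes more than a hundred dog breeds out of the box:

```python
# A sketch only: a pretrained ImageNet classifier, not the Fetch! model.
# ImageNet's 1,000 labels include over a hundred dog breeds, so the top
# class for a dog photo is usually a breed. Requires torch, torchvision, Pillow.
import torch
from torchvision import models, transforms
from PIL import Image

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
model.eval()

def classify(path):
    image = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        probs = torch.softmax(model(image)[0], dim=0)
    best = int(probs.argmax())
    return best, float(probs[best])  # ImageNet class index and its confidence

# classify("my_dog.jpg") returns the most likely ImageNet class (often a breed) and a score
```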

Fetch! is already fairly accurate: it correctly identified the breed of a dog belonging to one of BI's reporters, a Havanese.

But it thinks my cat is a dog with a "99% match." Incorrect answer, Fetch!

And the app has a sense of humor. When it's fed an image that's clearly not a dog, such as a flower, it will try to identify it as well as possible. If you upload a photo of a person, it'll make a joke.

Dog classification is a classic challenge for AI researchers. Well-organized datasets for dog breeds are free and widely available. The prestigious ImageNet competition today requires entrants from places like Stanford and Google to identify many different types of images, but back in 2011, it only had one category: dog classification.

Machines may replace half of human jobs

In just 30 years, intelligent machines could replace about half of the global workforce.

That was the message Moshe Vardi, a computer science professor at Rice University and Guggenheim fellow, shared during a presentation at the American Association for the Advancement of Science annual meeting on Saturday in Washington D.C.

"We are approaching the time when machines will be able to outperform humans at almost any task," Vadri said, according to a report from The Guardian. "Society needs to confront this question before it is upon us: if machines are capable of doing almost any work humans can do, what will humans do?"

From drivers to sex workers, no job is safe, Vardi and other scientists warned.

"Are you going to bet against sex robots? I would not," Vardi added.

Vardi, of course, is not the first to warn of the loss of middle class jobs because of a rise in automation.

In fact, according to research published by McKinsey last year, as much as 45% of current jobs could be replaced using technology that already exists.

And in an Oxford University study published in 2013, researchers predicted that it could only take 10 to 20 years before almost 50% of jobs in the US are computerized.

Wendell Wallach, a consultant, ethicist, and scholar at the Yale University Interdisciplinary Center for Bioethics, told Tech Insider last year that we had reached a tipping point where technology is now destroying more jobs than it creates. And if the trend continues, Wallach warned we could face a serious crisis in the US and abroad.

"When people no longer receive the money from wages they need to support their families, it is hard to know what they will do, but in the past and in other countries this has been thought of as a situation ripe for a revolution," Wallach told Tech Insider.

Vardi delivered a similar message when he spoke over the weekend.

Vardi said that political leaders have largely ignored the reality that automation will continue to upend the employment landscape in the United States.

"We are in a presidential election year and this issue is just nowhere on the radar screen," he said.

He added that as machines replace humans in more occupations, people will ultimately be forced to confront their greatest challenge yet: finding meaning in life without the purpose of work.

"We need to rise to the occasion and meet this challenge," he said.

Robots will steal your job: How AI could increase unemployment and inequality

The future is supposed to be a glorious place where robot butlers cater to our every need and the four-hour work day is a reality.

But the true picture could be much bleaker.

Top computer scientists in the US warned over the weekend that the rise of artificial intelligence (AI) and robots in the workplace could cause mass unemployment and dislocated economies, rather than simply unlocking productivity gains and freeing us all up to watch TV and play sports.

And a recent report from Citi, produced in conjunction with the University of Oxford, highlights how increased automation could lead to greater inequality.

'If machines are capable of doing almost any work humans can do, what will humans do?'

The rise of robots and AI in the workplace seems almost inevitable at the moment. At a financial technology conference I attended last week, pretty much every startup presenting included AI in some form or another, and the World Economic Forum made "The Fourth Industrial Revolution" the theme of its Davos conference this year.

But The Financial Times reports that Moshe Vardi, a computer science professor at Rice University in Texas, told the American Association for the Advancement of Science over the weekend:

We are approaching the time when machines will be able to outperform humans at almost any task. Society needs to confront this question before it is upon us: if machines are capable of doing almost any work humans can do, what will humans do?

A typical answer is that we will be free to pursue leisure activities. [But] I do not find the prospect of leisure-only life appealing. I believe that work is essential to human well-being.

Professor Vardi is far from the first scientist to warn about the potential negative effects of AI and robotics on humanity. Tesla founder Elon Musk co-founded a non-profit that will "advance digital intelligence in the way that is most likely to benefit humanity as a whole," and Professor Stephen Hawking told the BBC in December 2014: "The development of full artificial intelligence could spell the end of the human race."

The World Economic Forum also backed up Professor Vardi's fears with a report released last month warning that the rise of robots will lead to a net loss of over 5 million jobs in 15 major developed and emerging economies by 2020.

'Increased leisure time may only become a reality for the under-employed or unemployed'

All these findings and fears are echoed in a recent research note put out by Citibank and co-authored by two co-directors and a research fellow of the University of Oxford's policy school, the Oxford Martin School.

Citi's global equity product head Robert Garlick writes in the report:

Could automation increase leisure time further whilst also maintaining a good standard of living for everyone? The risk is that this increased leisure time may only become a reality for the under-employed or unemployed.

The report, released last month and titled "Technology at work: V2.0", concludes that 35% of jobs in the UK are at risk of being replaced by automation, 47% of US jobs are at risk, and across the OECD as a whole an average of 57% of jobs are at risk. In China, the risk of automation is as high as 77%.

Most of the jobs at risk are low-skilled service jobs, such as call centre roles, or jobs in manufacturing. But increasingly, skilled jobs are at risk of being replaced too. The next big thing in financial technology at the moment is "roboadvice" — algorithms that can recommend savings and investment products to someone in the same way a financial advisor would. If roboadvisors take off, it could lead to huge upheaval in that high-skilled profession.
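
To make the idea concrete, here is a deliberately toy sketch of the kind of rule-based logic a roboadvisor might start from; the "100 minus age" rule and the risk tilt are assumptions for illustration, not how any real product works.

```python
# Toy "roboadvice" sketch (illustrative assumptions only): a baseline
# "100 minus age" equity allocation, adjusted for stated risk tolerance.
from dataclasses import dataclass

@dataclass
class Client:
    age: int
    risk_tolerance: str  # "low", "medium", or "high"

def recommend_allocation(client: Client) -> dict:
    equity = 100 - client.age                                # rule-of-thumb baseline
    equity += {"low": -15, "medium": 0, "high": 15}[client.risk_tolerance]
    equity = max(0, min(100, equity))                        # clamp to 0-100%
    return {"equities_pct": equity, "bonds_pct": 100 - equity}

print(recommend_allocation(Client(age=35, risk_tolerance="medium")))
# {'equities_pct': 65, 'bonds_pct': 35}
```

Real roboadvisors layer portfolio theory, tax rules and client questionnaires on top of logic like this, which is exactly why they threaten work that once required a human advisor.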

Garlick writes:

The big data revolution and improvements in machine learning algorithms means that more occupations can be replaced by technology, including tasks once thought quintessentially human such as navigating a car or deciphering handwriting.
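
Handwriting is a good example of how routine such tasks have become: a few lines of off-the-shelf machine learning now read handwritten digits with accuracy typically in the mid-90s percent range. The sketch below is purely illustrative and is not drawn from the Citi report; it trains a basic classifier on scikit-learn's small bundled digits dataset.

```python
# Illustrative only: "deciphering handwriting" as a few lines of scikit-learn.
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = load_digits(return_X_y=True)                  # 8x8 handwritten digits
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = LogisticRegression(max_iter=5000).fit(X_train, y_train)
print(f"test accuracy: {accuracy_score(y_test, clf.predict(X_test)):.2%}")
```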

Of course, these are theoretical risks — the technology that means these jobs could be done by robots and machines exists or is within reach, but it doesn't necessarily mean they will be. And the report is, in general, optimistic about the future of automation and robotics in the workplace.

But Citi says governments and populations are going to have to prepare for these changes, which are going to hit the world of work faster than technology advances have in the past.

The report predicts that many workers will have to retrain in their lifetime as jobs are replaced by machines. Citi recommends investment in education as the single biggest factor that could help mitigate the impact of increased automation and AI.

'Inequality between the 1% and the 99% may widen as workforce automation continues'

But within that recommendation Citi hints at the biggest issue associated with rising robotics and automation in the workplace — inequality.

Citi's Garlick says that "unlike innovation in the past, the benefits of technological change are not being widely shared — real median wages have fallen behind growth in productivity and inequality has increased."

He writes later:

The European Centre for the Development of Vocational Training (Cedefop) estimated that in the EU nearly half of the new job opportunities will require highly skilled workers. Today’s technology sectors have not provided the same opportunities, particularly for less educated workers, as the industries that preceded them.

Not only is technology set to destroy low-skilled jobs, it will replace them with high-skilled ones, meaning the biggest burden falls on those hit hardest. The onus will be on low-earning, under-educated people to retrain for high-skilled technical jobs — a big ask both financially and politically.

Carl Benedikt Frey, co-director for the Oxford Martin Programme on Technology and Employment, writes in the Citi report (emphasis ours):

The expanding scope of automation is likely to further exacerbate income disparities between US cities. Cities that exhibited both higher average levels of income in 2000 and higher average income growth between 2000 and 2010 are less exposed to recent trends in automation.

Thus, cities with higher incomes, and the ones experiencing more rapid income growth, have fewer jobs that are amenable to automation. Similarly, cities with a higher share of top-1% income earners are less susceptible to automation, implying that inequality between the 1 percent and the 99 percent may widen as workforce automation continues. By contrast, cities with a larger share of middle-class workers are more at risk of computerisation.

Hence, new jobs have emerged in different locations from the ones where old jobs are likely to disappear, potentially exacerbating the ongoing divergence between US cities. Looking forward, this trend will require workers to relocate from contracting to expanding cities.

And not only will the less well-off be forced to make the biggest changes in the robot revolution — re-educating and relocating — those who do retrain will be competing for fewer and fewer jobs.

Here's the Citi report:

This downward trend in new job creation in new technology industries is particularly evident starting in the Computer Revolution of the 1980s. For example, a study by Jeffrey Lin suggests that while about 8.2% of the US workforce shifted into new jobs associated with new technologies during the 1980s, during the 1990s this figure declined to 4.4%. Estimates by Thor Berger and Carl Benedikt Frey further suggest that less than 0.5% of the US workforce shifted into technology industries that emerged throughout the 2000s, including new industries such as online auctions, video and audio streaming, and web design.

The study suggests that new technologies are creating fewer and fewer jobs, and it is likely that advances in automation and AI will destroy jobs at a much faster rate than they create new roles.

Citi says "forecasts suggesting that there will be 9.5 million new job openings and 98 million replacement jobs in the EU from 2013 to 2025. However our analysis shows that roughly half of the jobs available in the EU would need highly skilled workers."

Yes, automation and robotics will bring advances and benefits to people — but only a select few. Shareholders, top earners, and the well-educated will enjoy most of the benefits that come from increased corporate productivity and a demand for technical, highly-skilled roles.

Meanwhile, the majority of society — middle classes and, in particular, the poor — will experience significant upheaval and little upside. They will be forced to retrain and relocate as their old jobs are replaced by smart machines.

All hail our new robot overlords!

Here's how the robot in 'Ex Machina' was created for a film with a budget of just £10 million

Anyone with an appreciation for artificial intelligence in movies will likely have been impressed with humanoid robot "Ava" in the Hollywood film "Ex Machina."

"Ex Machina" was made on a budget of just £10 million, which meant that software had to be used to create things that might have been physically made if the film had a higher budget.

Visual effects firm Double Negative created Ava using advanced motion tracking technology. Double Negative's Andrew Whitehurst, the visual effects supervisor on "Ex Machina" and an Oscar nominee, told the BBC: "We made decisions with the way that Ava was designed that if it had been a massive budget film we wouldn’t have done because you can do whatever you like."

Whitehurst explained that "Ava" is essentially a combination of Alicia Vikander, the Oscar-nominated actress who played her, and computer-generated imagery. He said the movement of certain parts of Ava's anatomy, including her chest and shoulders, is hard to replicate in software, so he always tried to use footage of Vikander in those instances.

"Once we’d shot with the actors, we’d ask them to step out and then we’d shoot a 'clean plate', which is where we try and copy the camera move but with none of the actors in there," Whitehurst added. "We could use that then to paint out the parts of Alicia that we wanted to be rid of. That gave us a clean background that we could put the CG robot internals over the top of."

In the film, Ava is created by Nathan Bateman (Oscar Isaac), the founder and CEO of the software company BlueBook.

Bateman invites his employee Caleb (Domhnall Gleeson) to put Ava through the Turing test, which is designed to test an AI's ability to persuade the tester it is human.

"Ex Machina" has gone on to win widespread critical praise and earned over £24 million at the worldwide box office.

While the film failed to win any BAFTAs last night, it is still up for a number of Oscars.
