Channel: Artificial Intelligence

What science fiction like 'Westworld' and 'Black Mirror' tells us about our near future


Evan Rachel Wood as Dolores in Westworld. Credit: HBO

From Humans to Westworld, from Her to Ex Machina, and from Agents of S.H.I.E.L.D. to Black Mirror — near-future science fiction in recent years has given audiences some seriously unsettling and prophetic visions of the future. According to these alternative or imagined futures, we are facing a post-human reality in which humans are either rebelled against or replaced by their own creations. These stories propose a future where our lives will be transformed by science and technology, redefining what it is to be human.

The near future science fiction sub-genre imagines a future only a short time away from the period in which it is produced.

Channel 4/AMC's Humans imagines a near future or alternative world where advanced technology has led to the development of anthropomorphic robots called Synths that eventually gain consciousness. As the Synths become increasingly indistinguishable from humans, the series explores notions of what it is to be human: societally, culturally, and psychologically.

The second series was particularly concerned with the rights associated with being able to think and feel — and the right to a fair trial. Odi, an outdated NHS caregiving Synth who features in both seasons, chooses a form of suicide (reverting to his original settings and rejecting consciousness) as he can't deal with his new reality.

The robots that inhabit the future theme park in Westworld are also introduced as the playthings of the super-rich. In both cases, the fictional scientists who have created these androids, David Elster (Humans) and Robert Ford (Westworld), purposely design or make provisions for their creations to "become human" with a variety of intentions and both utopian and dystopian possibilities. Both series question the distinction between "real" and "fake" consciousness and the complexities of having a creation come to life.

Too believable

The challenge of near future science fiction is that for it to be believable it needs to closely align to the latest developments in science and technology. This means that it has the potential to become obsolete or even come to pass in the lifetime of its creator. News reports and commentaries from scientists such as Stephen Hawking about the dangers of AI and concerns that "humanity could be the architect of its own destruction if it creates a super-intelligence with a will of its own" make the fears articulated on screen seem more real and more frightening.

Some of today's most popular science fiction takes real-world science and follows it to a possible conclusion, showing it can have a direct impact on each of our lives rather than just on far-future global and intergalactic events. Stories about the near future have proliferated because they are popular with audiences and filmmakers alike. They allow for discussions of the implications of believable changes, such as the artificially intelligent operating system Samantha (voiced by Scarlett Johansson) in the film Her, or the thought-controlled contact lenses that appear in various forms in episodes of Charlie Brooker's Black Mirror.

These near-future fictions offer prescient alternatives to other science fiction set in the far future. Consider, for example, the alarming relevance of The Handmaid's Tale. The novel by Margaret Atwood has been adapted for the small screen and will air in late April. It is set in a gender-segregated, theocratic republic that is fixated on wealth and class. Women are rated according to their ability to reproduce in a near-future where environmental disasters and rampant sexually transmitted diseases have rendered much of the population infertile.

Amid growing fears of religious conservatism in Trump's America, Samira Wiley, one of the stars in the new adaptation, remarks that it "is showing us the climate we're living in [and] specifically, women and their bodies and who has control of our bodies".

Science fiction's alternate worlds and imagined futures — whether dystopian or utopian — force audiences to look upon their own reality and consider how changes in our societies, technologies, and even our own bodies might take shape and directly influence our own future. Whether presenting a positive or negative future, science fiction attempts to provoke a response, highlighting issues that need to be dealt with by everyone, not just by scientists and governments.

Past shock

In some senses, science fiction has caught up with us. The idea that we might be able to have android servants, or a personal bond with our computers has been crystallised by Apple's personal assistant Siri. Research into self-healing implants has brought the prospect of enhancing our bodies to make us more than human ever closer.

The future isn't as far-fetched as it used to be and it often feels like the futures we see on screen should already be here, or are already here even when they aren't. We are perhaps shifting from what the futurist Alvin Toffler termed "future shock" to a sort of "past shock".

Toffler defined future shock as "too much change in too short a period of time" — an overwhelming psychological state affecting both societies and individuals who cannot keep up with or comprehend the speed of technological change that seems to constantly redefine conceptions of the self and society. But we might now be entering an age of "past shock", where we are able to imagine and accept technological changes well before they are developed or even patented. The shock is no longer at the speed of technological change, but rather at its apparent slowing, as scientists cannot keep up with our own imagined futures.

As the line between real-world science and science fiction becomes increasingly fluid, the future is closer than it has ever felt before.

Amy C. Chambers, Research Associate in Science Communication & Screen Studies, Newcastle University

This article was originally published on The Conversation. Read the original article.




Netflix is using AI to make the videos you watch on your phone look a whole lot better (NFLX)


Eleven from Netflix's "Stranger Things."

BARCELONA, Spain — The quality of the Netflix videos you watch on your smartphone is about to get a whole lot better.

The American video-streaming-and-rental company is going to make use of artificial intelligence (AI) to improve how it encodes its videos on a scene-by-scene basis for mobile, it announced this week — cutting down the amount of data required to stream video, and letting users on slow connections view better-quality video.

Working with the University of Southern California, Netflix conducted a study that showed test subjects shots at different qualities, and asked them to judge which one looked better.

It then used these results to train a neural network on what quality footage looks like — the network then goes through Netflix's videos scene by scene to make them use as little data as possible without sacrificing visual quality.

This is possible because not all video requires the same amount of data to look good. In a press briefing in Barcelona, Spain, on Wednesday, Netflix vice president of product innovation Todd Yellin used two examples: "Daredevil" and "Bojack Horseman."

"Daredevil" is a live-action drama full of special effects and complex scenes, while "Bojack Horseman" is a cartoon. Clearly, it should take less space to save a video of Bojack Horseman than Daredevil, because there's less detail in any given scene that needs to be captured.

And because the AI encoding is done shot by shot, shows of varying visual complexity can be optimised so they don't take up unnecessary bandwidth while still having sufficient detail when it really matters.

Yellin showed a demonstration: two versions of the trailer for the upcoming Marvel action series "Iron Fist." One was encoded traditionally, while the other had been given the AI treatment. The visual quality was roughly the same (the traditional one was perhaps a tiny bit better, but it was hard to tell) — but the traditional one was a 555kbps stream, while the AI-powered one was 227kbps, less than half the size.
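The per-shot optimisation Yellin describes can be sketched as picking, for each shot, the cheapest bitrate whose predicted quality clears a target. This is only an illustrative sketch: the ladder values, score scale, and function names are assumptions, not Netflix's actual pipeline, and the made-up quality predictions stand in for the trained network's output.

```python
# Illustrative per-shot bitrate selection: for each shot, take the cheapest
# bitrate whose predicted perceptual quality clears a target score. The
# predictions would come from the trained quality network; here they are
# invented numbers on a 0-100 scale.
BITRATE_LADDER_KBPS = [100, 227, 350, 555, 750]

def choose_bitrate(predicted_quality, target_quality=80.0):
    """predicted_quality maps bitrate (kbps) -> predicted quality score."""
    for kbps in BITRATE_LADDER_KBPS:  # ascending, so the first hit is cheapest
        if predicted_quality.get(kbps, 0.0) >= target_quality:
            return kbps
    return BITRATE_LADDER_KBPS[-1]  # nothing clears the bar: serve the best we have

# A flat cartoon shot already looks good at a low bitrate...
cartoon_shot = {100: 78.0, 227: 85.0, 350: 90.0, 555: 93.0, 750: 94.0}
# ...while a complex live-action shot needs far more bits for the same score.
action_shot = {100: 55.0, 227: 68.0, 350: 76.0, 555: 84.0, 750: 90.0}

print(choose_bitrate(cartoon_shot), choose_bitrate(action_shot))  # 227 555
```

With those example predictions the cartoon shot is served at 227kbps while the action shot gets 555kbps, which is the kind of per-show saving Yellin demonstrated.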

The feature isn't available today, but will be rolling out in the next couple of months (between two and five months, Yellin said). It will be utilised for mobile video at first, though there are apparently plans to bring it to desktop and smart TV viewing as well, and the viewer doesn't need to do anything — it works automatically in the background.


Experts say these 5 skills are 'robot proof'


An AlphaGo match.

The rise of artificial intelligence (AI) in machine learning and natural language processing technologies will have a huge impact on the media, advertising, retail, finance, and healthcare industries in the near future.

Technology research firm Gartner claims that 85% of all customer interactions won't require human customer service reps by the end of this decade.

You may be thinking creativity makes you robot proof, because robots cannot be creative, but that notion is no longer true. A composer told me recently his work as a carpenter is robot-proof, not his career as a composer.

IBM's Watson has co-produced a movie trailer with 20th Century Fox. The Painting Fool, a computer program, has been taught to recognize human emotion and respond accordingly. A team of Microsoft researchers has taught a computer to analyze the work of Rembrandt before producing an original painting in the same style, and printing it in 3D to give it the same texture as an oil painting.

You may argue that robots cannot make complex decisions, but we now have AI judges weighing legal evidence and moral questions of right and wrong to accurately predict the result in hundreds of real life cases.

How will we compete with robots?

We will always need heart-and-soul human connections in the workplace. Dov Seidman, CEO of LRN, said in a New York Times article:

"Our highest self-conception needs to be redefined from 'I think, therefore I am' to 'I care, therefore I am; I hope, therefore I am; I imagine, therefore I am. I am ethical, therefore I am. I have a purpose, therefore I am. I pause and reflect, therefore I am.'"

Marty Neumeier, branding expert and author of "Metaskills: Five Talents for the Robotic Age," says these five metaskills — these highly human abilities — are the best bulwark against business or career obsolescence.

  1. Feeling: empathy and intuition
  2. Seeing: how the parts fit the whole picture (a.k.a. systems thinking)
  3. Dreaming: applied imagination, to think of something new
  4. Making: creativity, design, prototyping, and testing
  5. Learning: learning how to learn (the opposable thumb of all the other metaskills)

These skills are at the heart of human-centered design thinking, and they complement the five discovery skills of disruptive innovators identified in "The Innovator's DNA": questioning, observing, associating, experimenting, and networking.

Where are you on the Robot Curve?

Neumeier's Robot Curve is a simple model of innovation that shows how new processes, businesses, and technologies continuously destroy old ones as they create new opportunities for wealth. Where are you positioned on the robot curve, and how can you optimize your business or career?

There are two ways you can optimize: 1) By keeping your skills or products moving toward the top of the curve, or 2) by designing or managing skills or products at the bottom of the curve.

'Learnability' could save your job — and your company

"It's time to take a fresh look at how we motivate, develop and retain employees. In this environment, learnability — the desire and capability to develop in-demand skills to be employable for the long-term — is the hot ticket to success for employers and individuals alike," says Mara Swan, executive vice president of global strategy and talent at ManpowerGroup, in her World Economic Forum article.

To ensure you and your kids are robot proof, make it a habit to keep learning. Activate your curiosity by cultivating a wide range of interests. Don't just read; try out new experiences outside your comfort zone. Nothing awakens your senses more than taking a leap into the unknown and making new discoveries.



Salesforce will be using IBM Watson to make its Einstein AI service even smarter (IBM, CRM)


Marc Benioff and Ginni Rometty

Watson and Einstein are teaming up.

The two artificial intelligence products from IBM and Salesforce, respectively, are being brought together as part of a new partnership between the tech companies. 

Besides creating a tag team named after two familiar characters from literature and science, the AI partnership is designed to help retailers crunch a broad variety of data — including customer shopping preferences, weather data and industry information — to boost business.

IBM and Salesforce describe the new partnership like this:

By combining local shopping patterns, weather and retail industry data from Watson with customer-specific shopping data and preferences from Salesforce Einstein, a retailer will be able to automatically send highly personalized and localized email campaigns to shoppers.

IBM is also making weather data available to Salesforce customers as a service, to help them analyze how weather events impact their business.
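As a purely hypothetical sketch of that data-combination idea, a retailer might join local weather conditions with each customer's stored preferences to choose a campaign. Every name, field, and rule below is invented for illustration; this is not Watson's or Einstein's actual API.

```python
# Hypothetical join of local weather with per-customer shopping preferences
# to pick a personalized email campaign.
weather_by_city = {"Chicago": "snow", "Miami": "sun"}

customers = [
    {"name": "Ana", "city": "Chicago", "prefers": "outerwear"},
    {"name": "Ben", "city": "Miami", "prefers": "swimwear"},
]

def pick_campaign(customer):
    conditions = weather_by_city.get(customer["city"], "unknown")
    if conditions == "snow" and customer["prefers"] == "outerwear":
        return "winter-coat-sale"      # cold snap + coat shopper
    if conditions == "sun" and customer["prefers"] == "swimwear":
        return "beach-gear-sale"       # sunshine + beach shopper
    return "generic-newsletter"        # no useful weather signal

campaigns = {c["name"]: pick_campaign(c) for c in customers}
print(campaigns)  # {'Ana': 'winter-coat-sale', 'Ben': 'beach-gear-sale'}
```

The real systems would learn these rules from data rather than hard-code them, but the shape of the problem — a join of weather, industry, and customer records driving an automated campaign — is the same.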

This new partnership comes after IBM acquired huge Salesforce partner Bluewolf Group, with Salesforce CEO Marc Benioff's blessing, a year ago, reportedly spending $200 million on that deal. IBM had been a consulting partner for Salesforce, but the acquisition really upped its game there.

As companies increasingly ditch old-fashioned software and opt for the cloud instead, Salesforce has been a big winner. The classic software projects that previously made up much of IBM's consulting business (and a good chunk of its hardware sales, such as giant SAP installations) have been the big losers.

The good news for IBM: not only are companies buying Salesforce's cloud apps, but they are paying consultants big bucks to do all sorts of custom apps and integration work for them. At the time of the Bluewolf acquisition, IBM said the Salesforce professional services industry was projected to be a whopping $111 billion market.

By bringing its all-important Watson service to Salesforce and Einstein customers, IBM is determined to double down on that huge Salesforce consulting market, not compete with it.



59 impressive things artificial intelligence can do today


Chess board

2050.

That’s the year in which artificial intelligence will be able to perform any intellectual task a human can perform, according to one survey of experts at a recent AI conference. Anything and everything any person has ever done in all of history — all of it doable, by 2050, by intelligent machines.

But what can AI do today? How close are we to that all-powerful machine intelligence? I wanted to know, but couldn’t find a list of AI’s achievements to date. So I decided to write one.

What follows is an attempt at that list. It’s not comprehensive, but it contains links to some of the most impressive feats of machine intelligence around.

Here's what AI can do:


What AI can do: Everyday human stuff

👓 Recognize objects in images

🗺 Navigate a map of the London Underground

👂 Transcribe speech better than professional transcribers

🌎 Translate between languages

😮 Speak

Pick out the bit of a paragraph that answers your question

😡 Recognize emotions in images of faces

🙊 Recognize emotions in speech



Travel

🚘 Drive

🚁 Fly a drone

🅿️ Predict parking difficulty by area



Science & medicine

💊 Discover new uses for existing drugs

🚑 Spot cancer in tissue slides better than human epidemiologists

💉 Predict hypoglycemic events in diabetics three hours in advance

👁 Identify diabetic retinopathy (a leading cause of blindness) from retinal photos

🔬 Analyze the genetic code of DNA to detect genomic conditions

🕵 Detect a range of conditions from images

⚛️ Solve the quantum state of many particles at once




Uber's top AI executive is the latest to step aside after four months with the company


Uber office employee

Gary Marcus, the much-celebrated hire who was in charge of Uber's AI Labs, is stepping down from his position at the company after just four months. In a Facebook post, Marcus said he will now become a "special advisor" to Uber's AI efforts.

"Great news - I am headed back to my family in New York! I've negotiated a new role with Uber as Special Advisor for AI that gives me more flexibility. So proud of the Geometric team - it has been great working with them," Marcus wrote.

Marcus joined Uber in December 2016 to much fanfare from the company. Uber acquired his 15-person startup, Geometric Intelligence, and brought Marcus in to lead its new AI Lab. No terms of the deal were disclosed at the time.

"In spite of notable wins with machine learning in recent years, we are still very much in the early innings of machine intelligence," Uber exec Jeff Holden wrote in a blog post. "The formation of Uber AI Labs, to be directed by Geometric’s Founding CEO Gary Marcus, represents Uber’s commitment to advancing the state of the art, driven by our vision that moving people and things in the physical world can be radically faster, safer and accessible to all."

At the time, Marcus told the Wall Street Journal that he planned to hire aggressively and eventually open an office in the UK. Yet, after four months, Marcus is no longer leading Uber's AI efforts, as first reported by Axios.

Uber confirmed his departure and a spokesman reiterated that the company remains committed to its AI Lab and is super excited to have the Geometric Intelligence team on board.

His departure is the latest in a string of high-profile names to leave the company. Former Twitter engineer Raffi Krikorian stepped down from his role as a senior director of engineering at Uber's Advanced Technologies Center in late February. On Wednesday, it was announced that another key member of Uber's self-driving team, Charlie Miller, had left Uber to join Chinese rival Didi's self-driving car lab.

Uber has also had two executives resign as the company investigates sexual harassment and gender bias in its workplace. Last week, Amit Singhal was asked to resign as SVP of engineering by Travis Kalanick after it was revealed he didn't inform Uber about previous allegations of sexual assault. Later that week, Uber's VP of Product and Growth Ed Baker also resigned under mysterious circumstances.



IBM speech recognition is on the verge of super-human accuracy


IBM's Ginni Rometty.

In the world of speech recognition software, 5.1% is kind of a magic number.

Companies that can create software with error rates falling in that ballpark are essentially matching the capabilities of humans, who miss roughly 5% of the words in a given conversation.
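That "roughly 5% of the words" figure is a word error rate (WER). As a minimal sketch — not IBM's evaluation code — WER can be computed as the word-level edit distance between a reference transcript and the system's hypothesis, divided by the reference length:

```python
def word_error_rate(reference, hypothesis):
    """Word-level edit distance between transcripts, divided by reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # Standard dynamic-programming edit distance over words.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i  # deleting every reference word
    for j in range(len(hyp) + 1):
        d[0][j] = j  # inserting every hypothesis word
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            substitution = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,                 # deletion
                          d[i][j - 1] + 1,                 # insertion
                          d[i - 1][j - 1] + substitution)  # match or substitution
    return d[len(ref)][len(hyp)] / len(ref)

print(word_error_rate("the quick brown fox", "the quick brown box"))  # 0.25
```

On this scale, IBM's 5.5% result means roughly one word in eighteen is transcribed wrongly.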

On March 7, IBM announced it had become the first to home in on that benchmark, having achieved a rate of 5.5%. The breakthrough signals a big win for artificial intelligence that could eventually live in smartphones and voice assistants like Siri, Alexa, and Google Assistant.

"The ability to recognize speech as well as humans do is a continuing challenge, since human speech, especially during spontaneous conversation, is extremely complex," Julia Hirschberg, a professor of computer science at Columbia University, told IBM in a statement.

Over the last year, IBM has worked to break its former record of 6.9%. In order to cut the error rate by nearly 1.5 percentage points, the company fine-tuned aspects of its acoustic models, which pick up different forms of speech.

Though experts like Hirschberg say machines still can't pick up certain nuances of speech, such as tone and metaphor, software has made considerable advances in rote transcription. And the tests aren't feeding machines softballs: In the latest assessment, software had to discern what humans were saying in everyday contexts, such as buying a car, which were littered with stutters, ums, and mumbling.

IBM says the 5.5% claim to fame is especially important in an industry that often can't agree what humans are capable of.

"Others in the industry are chasing this milestone alongside us, and some have recently claimed reaching 5.9 percent as equivalent to human parity," wrote IBM research scientist George Saon.

In 2016, researchers from Microsoft announced they had built a computer that could actually beat humans at understanding conversation. The software had an error rate of 6.3%, well above IBM's new record.

But given the 5.1% goal IBM has set for itself, Saon continued, "we're not popping the champagne yet."



Stephen Hawking: We need a 'world government' to stop the rise of dangerous artificial intelligence


Stephen Hawking.

  • Stephen Hawking is concerned about the growing role of intelligent software in society.
  • He suggests a form of world government might help regulate and control artificial intelligence's rapid expansion.
  • Experts gave mixed reactions to Hawking's suggestion.

Physicist Stephen Hawking may be a proponent of artificial intelligence, but he has also been outspoken about the potential challenges it creates.

In a recent interview, he sounded a similar tone, and offered a solution that conservatives may find hard to accept.

Speaking to The Times of London to commemorate being awarded the Honorary Freedom of the City of London, a title that was conferred on him on Monday, Professor Hawking expressed optimism for the future. He added, however, that he is concerned about artificial intelligence (AI), as well as other global threats.

His answer: international action, and possibly world government.

"We need to be quicker to identify such threats and act before they get out of control," Hawking said. "This might mean some form of world government."

He cautioned, however, that such an approach "might become a tyranny."

Banding together to stop software run amok

Inside the United Nations.

As the role of artificial intelligence in society grows, computer scientists and policymakers are moving from constructing these systems to harnessing their power for the good of society.

Though observers are divided on the nature and scope of AI-related challenges, there is widespread agreement that these impacts need to be addressed.

Might world government provide a solution?

"Yes, I think much improved global governance may be necessary to deal with not only advanced AI, but also some of the other big challenges that lie ahead for our species," writes Nick Bostrom, a professor at the University of Oxford who is the founding director of the university's Future of Humanity Institute, in an email to The Christian Science Monitor. "I'm not sure we can survive indefinitely as a world divided against itself, as we continue to develop ever more powerful instruments."

Today, AI is involved in seemingly everything.

It's behind the advances in autonomous vehicles, it powers Facebook's ad screening processes, and it interacts with people everywhere through virtual assistants like Apple's Siri and Amazon's Alexa. In New York City, it's predicting fires, and in Britain, machine learning is being deployed to get people to pay their debts. Ultimately, it could even eradicate persistent social challenges like disease and poverty, Hawking previously indicated.

But with these unique opportunities come unique problems, observers suggest. Part of the concern is about what the economic transition to a world dominated by machines will look like.

"There are two main economic risks: first, that a mismatch may develop between the skills that workers have and the skills that the future workplace demands; and second, that AI may increase economic inequality by increasing the return to owners of capital and some higher-skill workers," Edward Felten, a professor of computer science and public affairs at Princeton University who is the founding director of the university's Center for Information Technology Policy, tells the Monitor in an email.

Those issues, he suggests, could be addressed by adopting public policies that will distribute the benefits of increased productivity.

What Hawking was more likely alluding to in his comments, however, are the concerns that AI will become hyper-powerful and start behaving in ways that humans cannot control.

Controlling less obvious threats

artificial intelligence robot

But not everyone is convinced that the overbearing machines of science fiction are a necessary eventuality: Professor Felten says he doesn't see "any sound basis in computer science for concluding that machine intelligence will suddenly accelerate at any point."

Amy Webb, founder and chief executive officer of the Future Today Institute, takes these threats more seriously.

One of the goals of AI, she explains to the Monitor, is to teach machines to connect the dots for themselves in "unsupervised learning" systems. That means placing a lot of trust in these AI systems' ability to make the right decisions.

She offers the analogy of a student learning math in a classroom:

"What happens if a student incorrectly learns that the answer to 1 + 1 is 3, and then teaches the next group of kids? That wrong answer propagates, and other decisions are based on that knowledge."

And the stakes are higher than in math class, she adds: "We will be asking them to make decisions about personal identification, security, weapon deployment, and more."
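Webb's classroom analogy can be made concrete with a toy simulation. The generations, facts, and training function below are entirely hypothetical, but they show how a fact learned wrongly once survives every round of downstream training:

```python
# Toy version of the classroom analogy: each generation of models learns its
# "facts" from the previous generation's answers, so an error learned once is
# inherited by everything trained downstream. All values are invented.
ground_truth = {"1+1": 2, "2+2": 4}

def train_generation(teacher_facts):
    # The student copies whatever the teacher believes, right or wrong.
    return dict(teacher_facts)

gen0 = {"1+1": 3, "2+2": 4}  # the first model learned one fact wrong
generations = [gen0]
for _ in range(3):
    generations.append(train_generation(generations[-1]))

# Every later generation still answers 1 + 1 = 3: the error propagates.
errors = [sum(g[k] != v for k, v in ground_truth.items()) for g in generations]
print(errors)  # [1, 1, 1, 1]
```

Nothing in the pipeline ever checks back against ground truth, which is exactly the trust problem Webb describes for unsupervised systems.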

Professor Bostrom frames the problem facing AI today as one of "scaleable control: how to ensure that an arbitrarily intelligent AI remains aligned with the intentions of its programmers."

Solutions may already be on the horizon

robot hand

There is a small but growing field of research addressing these problems, these commentators explain – and world government or international harmonization of AI laws may be one approach.

Though Bostrom says he does not expect "any imminent transformation in global affairs," world government may be just the next phase of political aggregation.

"We've already come most of the way – from hunter-gatherer band, to chiefdom, to city-state, to nation-state, to the present mesh of states and international institutions," he writes. "One more hop and we are there."

Ms. Webb, though she agrees that international cooperation would be valuable, is skeptical it will happen soon enough to address immediate issues.

"It would be great for all countries around the world to agree on standards and uses for AI, but at the moment we can't even get unilateral agreement on issues like climate change," she points out. It will take time for international government cooperation to catch up with AI development, she says.

Government decisions may also be affected by unexpected changes in human behavior as AI becomes more ubiquitous, notes Scott Wallsten, president and senior fellow at the Technology Policy Institute, in an email.

"Will safer cars based on AI cause people to respond by acting more recklessly, like crossing against a light if they believe cars will automatically stop?" Dr. Wallsten asks.

With that in mind, he suggests, effective policy solutions at the local, national, or international level should start with more research into the effects of AI.

"Any initiatives to address potential challenges need to be based on a solid understanding of what problems need to be solved and how to solve them in a way that makes sense," he concludes.




19 rumors we've heard about Samsung's Galaxy S8, one of the biggest smartphones of 2017


samsung galaxy s8 invitation

Samsung issued press invites to its Unpacked event, which will take place on March 29. 

Samsung's Unpacked events are when the company usually announces its new Galaxy smartphones.

Last month we had 17 rumors; this month's list adds two new rumors, plus a few updates to existing ones.

Check out what we've seen and heard about Samsung's upcoming Galaxy S8:


This is allegedly the Galaxy S8.

Prolific gadget leaker Evan Blass revealed an apparent press photo of the Galaxy S8.



The display will have rounded corners instead of sharp corners.

Two YouTube videos allegedly show Samsung's upcoming Galaxy S8 flagship smartphone with rounded corners instead of the sharp corners we've seen on previous Galaxy S smartphones.

The recently announced LG G6 has similarly rounded corners. LG claims they help make the screen more durable against cracks, and they match the rounded corners of the phone's design. 



Both Galaxy S8 models might have curved screens, but some rumors say there will be a flat-screened model.

Sources speaking to the Korea Herald claim both Galaxy S8 models will have curved screens, and there won't be a flat-screened model.

However, this particular rumor is a mixed bag, as the reputable SamMobile claims Samsung will indeed release a flat-screened model.




Mark Cuban thinks the world's first trillionaire will work in artificial intelligence


Mark Cuban

If you want to become the world's first person to have a net worth with 12 zeros — or $1 trillion — Mark Cuban says artificial intelligence is your industry.

Cuban, a billionaire entrepreneur and owner of the Dallas Mavericks, spoke at the SXSW festival in Austin, Texas, this past weekend, claiming AI held untold wealth for anyone clever enough to tap its true potential.

"I am telling you, the world's first trillionaires are going to come from somebody who masters AI and all its derivatives and applies it in ways we never thought of," Cuban said.

The person closest to becoming a trillionaire, at least according to public financial records, is Bill Gates, although he is still about $915 billion short of the title. Some research suggests Gates will become the first trillionaire by the mid-2040s just by virtue of his accumulating wealth.
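That mid-2040s estimate is just compound growth. As a back-of-the-envelope sketch — the 10% annual rate is an illustrative assumption, not a figure from the research cited — wealth of about $85 billion (i.e. $915 billion short of $1 trillion) compounds to $1 trillion in roughly 26 years:

```python
import math

def years_to_reach(current_worth, target, annual_growth):
    """Years for wealth to compound from current_worth to target."""
    return math.log(target / current_worth) / math.log(1.0 + annual_growth)

# About $85 billion today, compounding at an assumed 10% a year:
years = years_to_reach(85e9, 1e12, 0.10)
print(round(years, 1))  # 25.9, i.e. the early-to-mid 2040s
```

A slower growth rate pushes the date out quickly: at 5% a year the same calculation gives roughly 50 years.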

Cuban's prophecy at SXSW tracks with his past comments that liberal arts are the ticket to professional success. Rather than learn the ins and outs of finance, Cuban sees critical thinking and problem solving becoming far more valuable in the coming decades.

"I would not want to be an accountant right now. I would rather be a philosophy major," he said. However, he did emphasize at the conference the value of studying computer science. "Whatever you are studying right now if you are not getting up to speed on deep learning, neural networks, etc., you lose."

Other tech entrepreneurs have said trillionaires are inevitable. Sam Altman, president of Silicon Valley's largest startup accelerator, Y Combinator, told Business Insider in July 2016 that despite how unfair it might seem, "we need to be ready for a world with trillionaires in it."

Bob Lord, an inequality analyst and tax attorney, believes the rapid acceleration of wealth will happen sometime in the next 25 to 30 years — and will most likely involve technology that doesn't exist yet.

"Someone is going to create something that no one has conceived of before," he told Business Insider.

Cuban concluded his talk at SXSW by encouraging audience members to think about how the labor market could absorb the job losses that advances in AI could produce.

For his own part, Cuban hasn't jumped on the bandwagon for universal basic income — one of the more popular solutions for a growing AI sector, in which wealth is redistributed to give everyone a salary regardless of work status.

Instead, he has emphasized the need for more efficient government as well as job-creating social support services like AmeriCorps.

SEE ALSO: Bill Gates could be the world's first trillionaire by 2042

Join the conversation about this story »

NOW WATCH: Barbara Corcoran shares 3 things she's learned from working alongside Mark Cuban on 'Shark Tank'

Artificial intelligence isn’t science fiction – it’s a business fact

Artificial intelligence goes by many names — machine learning, augmented intelligence, neural networks and more.

In fact, over a dozen terms are used to describe how machines are being used by organizations of all types to emulate human thinking, reasoning and decision-making.

Artificial intelligence is simply machines emulating human thinking, reasoning, and decision-making. Far from being the stuff of science fiction, AI is here now and creating exciting new opportunities for forward-thinking enterprises in every industry.

If you want the real story about how AI may affect your organization—and your career—we have good news.   

In one simple step, you can acquire the critical information you need to not only understand artificial intelligence as never before, but also see how you can use artificial intelligence to your benefit.

It’s a shortcut that can revolutionize your business, leapfrog your competitors and grow your company’s bottom line: The eMarketer Artificial Intelligence Bundle.

The acknowledged leader in the field of digital economy research, eMarketer, is for the first time making two of their most important reports on artificial intelligence available to non-members like you.

With one simple step, you can acquire the most comprehensive, wide-ranging and valuable research on AI in business.

Only The eMarketer Artificial Intelligence Bundle gives you an in-depth look at how AI is being used today and where it’s headed tomorrow.    

Simply put, all the key players in your organization need to understand the promise of AI or risk being left behind by the competition:      

  • Marketers and advertisers can exponentially increase the effectiveness of their marketing campaigns and ad buys through AI’s ability to find higher-qualified prospects and deliver the right messages to them at the right time.
  • Customer service managers can use new call center algorithms and chatbots to satisfy customers more quickly and more efficiently.
  • Media and content providers can use AI not only to provide a better website experience through smart recommender systems and search technologies, but also to create automated content that sounds like it was written by a human.
  • IT managers, of course, will need to know how to evaluate, budget for and maintain AI systems that may be completely unlike anything they’ve experienced before.    

Whether you’re already employing AI techniques or just getting off the ground, this is the kind of analysis you need to make the most of your AI initiatives.    

A One-Of-A-Kind Resource

Just a few minutes with this special research collection will reveal why it is superior to any similar product:

  • Breadth and depth of research: The reports in The eMarketer Artificial Intelligence Bundle have scores of insights, revelations and facts. Because their researchers specialize in gathering research from as many sources as possible—academic institutions, government data, industry associations, online media platforms, audience measurement firms, and other media researchers and consultants—you get the most important big-picture insights with the granular detail you need to use it to its fullest potential.
  • Specific companies and technologies: Each report not only looks at the broader trends within AI, they name the specific companies and products that are on the leading edge of applying AI to thorny business problems.  
  • Expert analysis: The report authors have decades of experience in researching and analyzing all aspects of the digital world, from advertising, marketing and social media to technology, apps and demographics. They know how to separate the noise from the valuable insights and never bog you down in trivia or ignore the obvious.    
  • Informed forecasts: The authors don’t just report what’s working today. They conduct scores of interviews with executives, advertisers, media buyers and marketers to not just confirm the raw data, but also forecast where growth opportunities are in the years to come.

Not Available Anywhere Else

Typically, these exclusive reports are only available to a select number of eMarketer subscribers. But for the first time, eMarketer is making The eMarketer Artificial Intelligence Bundle available for purchase, and this is the only way you can get it!

But that's not all! Order today and you'll save more than 50% off the price you’d pay if each report in the bundle were sold separately. Click here to purchase the bundle.

Uber has hired a Cambridge AI guru as its chief scientist

Uber has appointed Cambridge University academic Zoubin Ghahramani as its chief scientist.

Ghahramani will be based out of Uber's head office in San Francisco and oversee Uber's AI Labs, the company's machine learning and AI research unit.

"I have realised what a fantastic place Uber is for machine learning and AI researchers," said Ghahramani in a blog post on Uber's website. "There are a huge number of opportunities for both near-term high-impact research and longer term challenges to work on."

The taxi-hailing firm, which has been in the spotlight this month for all the wrong reasons, announced the appointment a week after Gary Marcus, head of Uber AI Labs, left the company. Several other executives and managers have also left Uber in recent weeks.

The departures come after a series of allegations — including claims of sexual harassment and sexism — were made against managers at the company.

They also come after a video emerged of CEO Travis Kalanick arguing with an Uber driver who complained about low pay. After the video was published, Kalanick issued a statement saying: "I must fundamentally change as a leader and grow up."

Jeff Holden, chief product officer at Uber, wrote in a blog post on Uber's website that Ghahramani is a "natural" for the chief scientist role. "Zoubin is among the most influential AI/ML researchers in the world," he said.

Ghahramani, who has published over 250 academic papers, is a professor of information engineering at the University of Cambridge, where he oversees a team of roughly 30 researchers.

Explaining why he decided to join Uber, which is developing its own self-driving cars, Ghahramani wrote:

"Uber is one of the world’s fastest growing and innovative technology companies, and its mission to 'make transportation as reliable as running water, everywhere, for everyone' will have a tremendous positive impact on society. I'm a great believer in the potential for technology to improve lives around the world, and I’m really excited about helping Uber in this mission. I have realised what a fantastic place Uber is for machine learning and AI researchers: there are a huge number of opportunities for both near-term high-impact research and longer term challenges to work on; the resources both in terms of data and computation are plentiful; and there are many talented and brilliant colleagues to work with.

"Artificial intelligence and machine learning are absolutely central to Uber's mission. Uber's opportunities are unique among major technology companies, because they center around the real physical world, which is complex and difficult to predict. We have to navigate around the real world, develop perception and action systems for our self-driving cars, and understand, predict, and make more efficient the experience for our riders and drivers. At a larger scale, we are trying to model and optimize entire cities, and reimagine the future of transportation through, for example, urban VTOL aviation. The probabilistic ML approaches I work on are clearly useful for this, but we have assembled research talent across a much wider range of ML and AI approaches including deep learning, reinforcement learning, and optimization, as well as problem domains such as language and robotics. We are continuing to recruit across all these areas and more, for both talented researchers and engineers."

NOW WATCH: A mathematician gave us the easiest explanation of pi and why it’s so important

I learned how to apply makeup using a futuristic new feature on Sephora's app — here's what happened

Testing out a new makeup technique can be messy, time-consuming, and expensive. 

But beauty brand Sephora is trying to alleviate the struggle with a new feature on its app called Virtual Artist, which uses artificial intelligence to virtually apply makeup, teach you new makeup techniques, and show you how various looks would appear on your face. 

It wouldn't be Sephora without some shopping involved, however. If you find a look you like, the app lets you know which products were used — at that point, you can then add them to your shopping basket and buy through the app. 

I tested out the Virtual Artist to see how clever its artificial intelligence actually is. Several selfies later, here's what I found. 

SEE ALSO: This iPhone 7 case from Louis Vuitton costs a whopping $5,000 — take a look

First, you'll need to download the Sephora app. To access the Virtual Artist feature, click on the menu button and scroll down until you see it under the "Get Inspired" tab.



When you open up the feature, you'll have three options to play around with. I decided to test out the product try-on feature first.



You'll be presented with three different options: lips, lash, and shadow. It's probably best to try it without any makeup on to get the full effect, but it will still work if you have some eye makeup on like I did.



Here's how Bill Gates' plan to tax robots could actually happen

Bill Gates has stated in an interview that robots that take human jobs should pay taxes. This has some obvious attractions.

Not only, as Gates says, will we be able to spend the money to finance jobs for which humans are particularly suited, such as caring for children or the elderly, but robots are also unlikely to complain about tax levels; they don't use services financed by tax revenue, such as education or health; and they are most unlikely to salt away income and assets in a tax haven.

What's not to like?

Well, actually, you can't tax robots any more than you can tax any other inanimate object — but Bill Gates' suggestion does address some of today's most important tax issues. What proportion of its tax revenue should a state raise from each of the three main tax bases; capital, labor, and expenditure? And how can a state counteract tax avoidance by large companies and wealthy individuals?

Taxing robots would, in reality, be a tax on the capital employed by businesses in using them and might help to redress the long-term shift away from taxing capital. In 1981, the rate of corporation tax was 52%, although generous relief meant that the tax base was relatively narrow. This has now fallen to 20% and further reductions to 17% are planned by 2020.

By contrast, the principal expenditure tax, VAT, was originally set at 8% in 1973, but rose to 15% in 1979 and is now 20%. This means that individuals are contributing a larger proportion of tax revenue than previously through taxes on salaries and expenditure and businesses are contributing less through taxes on their profits — even though they make use of the UK's transport, financial, and legal infrastructure and benefit from the education and healthcare provided to their employees.

Fairer taxation

One of the justifications for not taxing capital is that companies do not bear the economic cost of taxation through lower returns to their shareholders, but pass it on to labor and consumers through lower wages and/or higher prices. This argument is contentious.

Edward Kleinbard, Professor of Law and Business at the University of Southern California, observes that "unseemly scuffles" can ensue when this topic is discussed at academic conferences. But whereas academics can have a good punch-up, agree to differ and then retire to the bar for drinks, governments must make policy based on their assumptions. Kleinbard cites several US government departments and committees that estimate capital bears between 75% and 95% of the economic cost of corporation tax, meaning that companies can shift only a small part of it onto labor and consumers.

Furthermore, when US Uncut in April 2011 issued a hoax press release purporting to be from GE, stating that it would hand back a $3.2 billion tax refund as "contrition for past abuses," the company's market capitalization very briefly fell by around $3.5 billion.

On this evidence, shareholders behave as though they believe that capital bears the economic cost of corporation tax through reduced dividends. If they did not believe this, why would they have cared? On the evidence of Kleinbard and the hoax, the theory that capital does not bear the economic cost of tax would therefore appear to be a rationalizing discourse put forward by those who benefit from lower taxes on capital.

Taxing robots might also help to counteract tax avoidance, because the tax would be calculated by taxing a notional salary paid to the robot, and the company would be allowed to deduct this notional payment for the purpose of corporation tax.

Tax avoidance by large multinationals typically operates by transferring taxable profits from where they economically arise to tax havens, where their presence is often no more than a brass plate on a wall and a mailbox, or even, in the case of Apple, to a company located in some mid-Atlantic limbo, whose profits were therefore not taxable in any tax jurisdiction. These companies pay the same rate of corporation tax as all other companies on their profits remaining in the UK or Ireland, but ensure that these are only a small fraction of their total profits.

In contrast, the robot tax, just like salaries, would be calculated on an amount notionally payable out of revenue and would be payable in the tax jurisdiction in which the robot was located. This would be where the revenue was economically generated and this location would be determined by economics rather than tax considerations.
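The mechanism described above reduces to a few lines of arithmetic. This is a minimal sketch, with rates and figures invented purely for illustration (the article proposes no specific numbers):

```python
def robot_tax(revenue, notional_salary, payroll_rate, corp_rate):
    """Sketch of the proposed mechanism: tax a notional salary 'paid' to the
    robot, and let the company deduct that salary from taxable profit, just
    as it would deduct a real employee's wages."""
    payroll_tax = notional_salary * payroll_rate
    taxable_profit = revenue - notional_salary  # deduction for corporation tax
    corporation_tax = taxable_profit * corp_rate
    return payroll_tax + corporation_tax

# Assumed figures: 100k revenue, 30k notional salary, 20% rates (illustrative).
print(robot_tax(100_000, 30_000, 0.20, 0.20))  # → 20000.0
```

The key property, as the text notes, is that the payroll portion is computed on revenue generated where the robot physically sits, so it cannot be shifted to a tax haven the way booked profits can.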

Is it even possible?

Finally, Bill Gates puts forward the currently unfashionable view that governments have an important role to play in combating inequality. For the past 35 or 40 years the dominant view has been that this would be achieved through economic growth and should therefore be left to the private sector and the markets.

But Gates says that combating inequality will require large amounts of excess labor to be used to help those on lower incomes, that robots will free up this labor and that the impetus for the necessary changes must come from governments because business cannot or will not do this of their own accord.

Furthermore, he says that taxing robots will not discourage innovation. People are naturally anxious about the effects of such technology, but taxation is a better way of allaying these fears than the alternative of banning aspects of it.

Could taxation of robots ever happen? Certainly it could, but the $64,000 question is whether there is the political will to do it. It would take a major paradigm shift in our attitude towards taxation to see it as a possible force for good, rather than simply a dead weight and burden. However, in the 1960s and 1970s today's attitude towards taxation would have been equally inconceivable. Never say never.

Malcolm James is a Senior Lecturer in Accounting & Taxation at Cardiff Metropolitan University.

This article was originally published on The Conversation. Read the original article.

SEE ALSO: Bill Gates says robots that take your job should pay taxes

NOW WATCH: BILL GATES: A deadly epidemic is a real possibility and we are not prepared

An ex-Google Brain AI expert who joined Chinese tech giant Baidu as chief scientist is now leaving the firm (BIDU)

Artificial intelligence (AI) expert Andrew Ng has announced that he is resigning from his role as chief scientist at Chinese search engine giant Baidu after nearly three years in the job.

Ng, who announced his departure in a blog post on Wednesday, does not currently have another job lined up, although he's likely to be in high demand.

"I will be resigning from Baidu, where I have been leading the company's AI Group," wrote Ng in the Medium blog post.

"After Baidu, I am excited to continue working toward the AI transformation of our society and the use of AI to make life better for everyone."

Along with the likes of DeepMind, Google, Microsoft and Facebook, Baidu is often seen as one of the world leaders when it comes to AI research. The company's CEO Robin Li described AI as Baidu's "key strategic focus for the next decade," and Baidu managed to poach executive Qi Lu from Satya Nadella's Microsoft.

As one of the highest-regarded minds in the industry, Ng is likely a big loss for Baidu; stock in the company fell 2.7%, or roughly $1.5 billion in market value, after he announced his exit.

There is currently a war for AI talent taking place around the world, with tech giants willing to pay hundreds of thousands of dollars for people who can help them shape their next generation of products. AI is being embedded into an increasing range of products, from Siri in the Apple iPhone to the navigation systems being integrated into Uber's self-driving cars.

Prior to joining Baidu in May 2014, Ng founded and led Google Brain, which develops massive-scale deep learning algorithms for the search giant. Among other things, Google Brain developed a massive neural network that learned from unlabelled YouTube videos to detect cats.

Here is Andrew Ng's full blog post.

Opening a new chapter of my work in AI

Dear Friends,

I will be resigning from Baidu, where I have been leading the company's AI Group. Baidu's AI is incredibly strong, and the team is stacked up and down with talent; I am confident AI at Baidu will continue to flourish. After Baidu, I am excited to continue working toward the AI transformation of our society and the use of AI to make life better for everyone.

Artificial Intelligence at Baidu

I joined Baidu in 2014 to work on AI. Since then, Baidu's AI group has grown to roughly 1,300 people, which includes the 300-person Baidu Research. Our AI software is used every day by hundreds of millions of people. We have had tremendous revenue and product impact, through the many dozens of AI projects that support our existing businesses in search, advertising, maps, take-out delivery, voice search, security, consumer finance and many more.

We have also used AI to develop new lines of business. My team birthed one new business unit per year each of the last two years: autonomous driving and the DuerOS Conversational Computing platform. We are also incubating additional promising technologies, such as face-recognition (used in turnstiles that open automatically when an authorized person approaches), Melody (an AI-powered conversational bot for healthcare) and several more. As the principal architect of Baidu's AI strategy, I am proud to have led the incredible rise of AI within the company.

Baidu is now one of the few companies with world-class expertise in every major AI area: speech, NLP, computer vision, machine learning, knowledge graph. It's been an honor to lead the AI Group and work with Baidu's remarkable team, from the executive leadership to the brilliant engineers, scientists, product managers and others. Robin Li was the first large company CEO to clearly see the value of deep learning, and remains globally one of the best AI CEOs. COO Lu Qi is a seasoned business executive with significant experience in AI; with his leadership, AI at Baidu will flourish. Wang Haifeng, the new head of the AI Group, is a fantastic researcher and technology leader; his leadership firmly positions the team for future greatness. Lin Yuanqing, our newly appointed head of Baidu Research, is a brilliant technology and business leader, who is creating both great AI technologies and great business results. Under their capable leadership, and that of other AI stars at Baidu such as Adam Coates, Jing Kun, Li Ping, Xu Wei, and Zhu Kaihua, AI at Baidu will continue to thrive, and I will be cheering their progress.

I've also been privileged to learn from both the U.S. and Chinese AI communities — both of which are powerhouses. The U.S. is very good at inventing new technology ideas. China is very good at inventing and quickly shipping AI products. I'm happy also to have had an opportunity to contribute to the rise of AI in both China and the U.S.

AI is the new electricity

Just as electricity transformed many industries roughly 100 years ago, AI will also now change nearly every major industry — healthcare, transportation, entertainment, manufacturing — enriching the lives of countless people. I am more excited than ever about where AI can take us.

As the founding lead of the Google Brain project, and more recently through my role at Baidu, I have played a role in the transformation of two leading technology companies into "AI companies." But AI's potential is far bigger than its impact on technology companies.

I will continue my work to shepherd in this important societal change. In addition to transforming large companies to use AI, there are also rich opportunities for entrepreneurship as well as further AI research. I want all of us to have self-driving cars; conversational computers that we can talk to naturally; and healthcare robots that understand what ails us. The industrial revolution freed humanity from much repetitive physical drudgery; I now want AI to free humanity from repetitive mental drudgery, such as driving in traffic. This work cannot be done by any single company — it will be done by the global AI community of researchers and engineers. My Machine Learning MOOC on Coursera helped many people enter AI. In addition to working on AI myself, I will also explore new ways to support all of you in the global AI community, so that we can all work together to bring this AI-powered society to fruition.

I am more optimistic than ever about the fantastic future we will build with AI. Let's keep working hard to get AI to help everyone!

Andrew Ng

NOW WATCH: Tesla will begin selling its Solar Roof this year — here's everything you need to know


Baidu's value took a $1.5 billion plunge after its chief scientist announced he's leaving (BIDU)

Stock in Chinese tech giant Baidu was down 2.7% on Wednesday morning after chief scientist Andrew Ng announced he's leaving the company.

The company's share price on New York's Nasdaq stock market fell from $177 (£142) a share to $170 (£136) a share.

Tom Weheimer, a researcher at a technology investment firm, pointed out on Twitter that the stock price fall equates to approximately $1.5 billion (£1.2 billion).

Baidu's market cap was $59.3 billion (£38 billion) at the start of Wednesday but it was on course to be significantly lower by the end of the day's trading.

It's possible that there are other factors at play here, but Ng's exit is no doubt a big loss for Baidu given he's one of the brightest minds in the industry.

"I will be resigning from Baidu, where I have been leading the company's AI Group," wrote Ng in a Medium blog post. "Baidu's AI is incredibly strong, and the team is stacked up and down with talent; I am confident AI at Baidu will continue to flourish. After Baidu, I am excited to continue working toward the AI transformation of our society and the use of AI to make life better for everyone."

Prior to joining Baidu in May 2014, Ng founded and led Google Brain, which develops massive-scale deep learning algorithms for the search giant. Among other things, Google Brain developed a massive neural network that learned from unlabelled YouTube videos to detect cats.

NOW WATCH: Your neighbor's WiFi is ruining yours — here's how to fix it

Software that can understand images, sounds, and language is helping people with disabilities

FCC rules require TV stations to provide closed captions that convey speech, sound effects, and audience reactions such as laughter to deaf and hard of hearing viewers. YouTube isn’t subject to those rules, but thanks to Google’s machine-learning technology, it now offers similar assistance.

YouTube has used speech-to-text software to automatically caption speech in videos since 2009 (they are used 15 million times a day). Today it rolled out algorithms that indicate applause, laughter, and music in captions. More sounds could follow, since the underlying software can also identify noises like sighs, barks, and knocks.
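YouTube hasn't published how the feature works internally; as a sketch of the general idea, assume an audio classifier emits (label, time, confidence) detections that are merged into the word-level caption track. All names and thresholds here are illustrative assumptions, not YouTube's actual pipeline:

```python
def merge_captions(words, sound_events, threshold=0.5):
    """Merge timed word captions with detected non-speech events
    (label, start_time, confidence), keeping chronological order and
    dropping low-confidence detections."""
    track = [(t, w) for t, w in words]
    track += [(t, f"[{label.upper()}]")
              for label, t, conf in sound_events if conf >= threshold]
    return " ".join(w for t, w in sorted(track))

words = [(0.0, "Thank"), (0.4, "you"), (0.8, "very"), (1.1, "much.")]
events = [("applause", 1.5, 0.93), ("music", 2.0, 0.2)]
print(merge_captions(words, events))  # → Thank you very much. [APPLAUSE]
```

The confidence threshold matters: a false "[LAUGHTER]" tag in the wrong place would mislead exactly the viewers the feature is meant to serve.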

The company says user tests indicate that the feature significantly improves the experience of the deaf and hard of hearing (and anyone who needs to keep the volume down). “Machine learning is giving people like me that need accommodation in some situations the same independence as others,” says Liat Kaver, a product manager at YouTube who is deaf.

Indeed, YouTube’s project is one of a variety that are creating new accessibility tools by building on progress in the power and practicability of machine learning. The computing industry has been driven to advance software that can interpret images, text, or sound primarily by the prospect of profits in areas such as ads, search, or cloud computing. But software with some ability to understand the world has many uses.

Last year, Facebook launched a feature that uses the company’s research on image recognition to create text descriptions of images from a person’s friends, for example.

Researchers at IBM are using language-processing software developed under the company’s Watson project to make a tool called Content Clarifier to help people with cognitive or intellectual disabilities such as autism or dementia. It can replace figures of speech such as “raining cats and dogs” with plainer terms, and trim or break up lengthy sentences with multiple clauses and indirect language.
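IBM has not published Content Clarifier's internals, but the two steps described above — substituting figures of speech and breaking up long sentences — can be sketched in a toy form. The idiom dictionary and word limit below are invented for illustration; a real system would need far broader coverage and a syntactic parse:

```python
import re

# Hypothetical idiom dictionary (illustrative only).
IDIOMS = {
    "raining cats and dogs": "raining heavily",
    "piece of cake": "easy",
}

def simplify(text, max_words=12):
    """Replace figures of speech with plainer terms, then break long
    sentences at clause boundaries (here, crudely, at semicolons and commas)."""
    for idiom, plain in IDIOMS.items():
        text = re.sub(idiom, plain, text, flags=re.IGNORECASE)
    out = []
    for sentence in re.split(r"(?<=[.!?])\s+", text):
        if len(sentence.split()) > max_words:
            # A real simplifier would split on parsed clauses, not punctuation.
            sentence = re.sub(r"[;,]\s+", ". ", sentence)
        out.append(sentence)
    return " ".join(out)

print(simplify("It was raining cats and dogs."))  # → It was raining heavily.
```

Even this toy version shows the risk Lubetkin raises later in the article: a bad substitution or split can change meaning, which is more serious here than in a movie recommender.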

The University of Massachusetts, Boston, is helping to test how the system could help people with reading or cognitive disabilities. Will Scott, an IBM researcher who worked on the project, says the company is talking with an organization that helps autistic high schoolers transition to college life about testing the system as a way of helping people understand administrative and educational documents. “The computing power and algorithms and cloud services like Watson weren’t previously available to perform these kinds of things,” he says.

Ineke Schuurman, a researcher at the University of Leuven in Belgium, says inventing new kinds of accessibility tools is important to prevent some people from being left behind as society relies more and more on communication through computers and mobile devices.

She is one of the leaders of an EU project testing its own text simplification software for people with intellectual disabilities. The technology has been built into apps that integrate with Gmail and social networks such as Facebook. “People with intellectual disabilities, or any disability, want to do what their friends and sisters and brothers do—use smartphones, tablets, and social networking,” says Schuurman.

Austin Lubetkin, who has autism spectrum disorder, has worked with Florida nonprofit Artists with Autism to help others on the spectrum become more independent. He welcomes research like IBM’s but says it will be a challenge to ensure that such tools perform reliably. A machine-learning algorithm recommending a movie you don’t care for is one thing; an error that causes you to misunderstand a friend is another.

Still, Lubetkin, who is working at a startup while pursuing a college degree, is optimistic that machine learning will open up many new opportunities for people with disabilities in the next few years. He recently drew on image-recognition technology from startup Clarifai to prototype a navigation app that offers directions in the form of landmarks, inspired by his own struggles to interpret the text and diagram information from conventional apps while driving. “Honestly, AI can level the playing field,” says Lubetkin.

SEE ALSO: Machine learning shows the potential of cost-cutting benefits

Treasury Secretary Mnuchin says job-stealing AI is 'so far in the future' that it's 'not even on my radar screen' — here's why he's wrong

Treasury Secretary Steve Mnuchin says artificial intelligence is "so far in the future" that it's "not even on my radar screen." He says we won't have to worry about how it affects the workforce for "50 or 100 more years."

Mnuchin made his remarks to Mike Allen of Axios, in a moment caught on video that you can watch here:

In fairness to Mnuchin, the question was specifically about artificial intelligence, not robots. It's a fine distinction, but an important one — while robots that can perform repetitive tasks have been in wide industrial use for decades now, artificial intelligence is a class of software that can "learn" and let machines do more sophisticated jobs.

In fact, in that video, neither Mnuchin nor Allen say the word "robot." And while it's undeniable that factories and even some entry-level jobs like fast-food cashiers are being replaced by robots of varying kinds, we're not quite at the point where artificial intelligence is "smart" enough to replace humans at wide scales.

But even if you account for that distinction, there's lots of evidence to suggest that Mnuchin is way off base.

The AI revolution is already here

In December 2016, the Obama Administration issued a report indicating that as many as 47% of all American jobs could be at risk from artificial intelligence in the next two decades. As artificial intelligence improves, we're going to see ever-smarter machines do everything from driving trucks to scanning warehouses for inventory.

As AI software gets better, it can perform more delicate tasks, too, such as making coffee and removing staples. Even assembly-line jobs previously held by humans can now be performed by robots, with Foxconn recently replacing 60,000 workers with machines.

Top thinkers, including Professor Stephen Hawking, have theorized that the disruption could be enough to decimate the middle class. Hawking has said that AI-powered machines will take many different kinds of jobs, with "only the most caring, creative or supervisory roles remaining."

That is why Mnuchin's remarks stunned many in the tech community; Mark Cuban responded by tweeting a single word: "Wow."

Technical issues are forcing Amazon to delay the public launch of its cashier-less grocery store (AMZN)

Technical difficulties have delayed the public opening of Amazon's futuristic new grocery store. 

The store, called Amazon Go, uses machine learning and cameras to detect what's in your cart and automatically charge your Amazon account so you can leave the store without ever taking out your wallet.

But Amazon Go isn't quite ready for prime time, according to The Wall Street Journal's Laura Stevens. The technology has trouble keeping track of more than 20 people at a time and struggles to follow an item that has been moved from its place on the shelf. For now, things run smoothly only when there are just a few customers in the store or when they're moving slowly, the Journal reports.
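
The bookkeeping behind a checkout-free store can be sketched as a per-shopper "virtual cart": camera events add and remove items, and the cart is settled when the shopper walks out. The genuinely hard part, attributing each event to the right shopper with computer vision (which is where the Journal says the system struggles), is omitted here; the event methods and prices below are invented for illustration.

```python
# Simplified "virtual cart" for a checkout-free store. Camera-driven
# pickup/put-back events mutate the cart; checkout() totals it on exit.

PRICES = {"milk": 2.49, "bread": 1.99}  # demo catalog

class VirtualCart:
    def __init__(self):
        self.items = {}  # item name -> quantity currently held

    def pick_up(self, item):
        self.items[item] = self.items.get(item, 0) + 1

    def put_back(self, item):
        if self.items.get(item, 0) > 0:
            self.items[item] -= 1

    def checkout(self):
        """Amount charged to the shopper's account when they leave."""
        return round(sum(PRICES[i] * n for i, n in self.items.items()), 2)

cart = VirtualCart()
cart.pick_up("milk")
cart.pick_up("bread")
cart.put_back("bread")   # shopper changed their mind mid-aisle
print(cart.checkout())   # 2.49
```

The cart logic itself is trivial; the reported delays come from the vision side, deciding which shopper a hand reaching for a shelf belongs to.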

Amazon has been testing the store at its campus in Seattle, with employees serving as beta testers. Amazon officially announced the store last December with a video showing how it will eventually work and said at the time that the store would be open to the public in early 2017.

Amazon did not immediately respond to a request for comment. 

The 21 most believable rumors about Samsung's Galaxy S8, one of the biggest smartphones of 2017

Samsung's big Galaxy S8 event on March 29th is fast approaching, and we're excited to see how the company will follow up on one of the best smartphones of 2016, the Galaxy S7.

We've heard plenty of rumors about the Galaxy S8 by now, so we rounded up the most credible leaks in one place and cut out those that seem like long shots.

Check out what we think Samsung has in store for us with the Galaxy S8:

So what does it look like? This is allegedly the Galaxy S8.

Prolific gadget leaker Evan Blass revealed apparent press photos of the Galaxy S8, showing the device from different angles and in different colors.



It'll have narrower borders than previous Galaxy S phones.

According to Bloomberg, the Galaxy S8 phones will have narrower borders than the Galaxy S7 phones. A slew of photo and video leaks, like Blass' photo leak in the first rumor of this list, appear to support this rumor, too.



The display will have rounded corners instead of sharp corners.

Two YouTube videos allegedly show Samsung's upcoming Galaxy S8 flagship smartphone with rounded corners instead of the sharp corners we've seen on previous Galaxy S smartphones.

Samsung's own press invitation for the Galaxy S8 launch event alludes to the rounded corners, too.

The recently announced LG G6 has similarly rounded corners. LG claims they make the screen more resistant to cracking and that they match the rounded corners of the phone's overall design.



See the rest of the story at Business Insider