Channel: Artificial Intelligence

Why emotional intelligence is key in the digital age



Much has been written about the relationship between a happy, positive workplace and an effective, productive workforce.

But the definition of happiness can be misunderstood – often it is seen as the presence of positive emotions and the absence of negative ones, which can lead to work cultures that pressure people into faking positive emotions. Research has shown this “faking” can result in long-term physical and emotional illness.

Associating the state of being happy merely with being cheerful all the time creates another challenge: in academic institutions, for example, happiness tends to be classified as less serious, superficial and lightweight.

This results in universities avoiding both the conversation about developing “happy” graduates and the adoption of a “happiness agenda” for the holistic development of their students.

At a time when depression and suicide are on the rise – currently 300 million people worldwide are suffering from depression – this is disturbing. A recent report by the World Health Organization predicted that if nothing is done, by 2030 depression will be the number one illness in the world.

Three steps to happiness

Happiness is not just about developing positive emotions; it has two other constituent parts: purpose and resilience. Having a clear and meaningful purpose is a key element in sustaining long-term happiness.

And because negative emotions are an integral part of life, developing resilience is the third essential component of happiness, as it enables us to deal effectively with negative emotions when they arise.


Employers who are serious about achieving effectiveness and productivity through a happy workforce need to ensure workers are given the opportunity to do engaging, meaningful and purpose-driven work, are able to develop good relationships and experience a sense of achievement.

Many indicators suggest that jobs of the future will require much more emotional intelligence to complement the sophisticated machines we work with.

Academic institutions need to seriously consider playing a role in developing students’ emotional intelligence and well-being to ensure that universities remain relevant in a world where the fourth industrial revolution demands the integration of physical, cyber and biological systems and the automation of an increasing number of jobs.

With the unprecedented levels of complexity and change societies are dealing with, it is crucial to explore how education systems can evolve to help young people develop self-awareness and social awareness if they are to thrive and achieve their full potential once they enter the workplace.

A space for human connection

Humans bring three dimensions to the job market: physical, cognitive and emotional.

Machines have surpassed us in both the physical dimension (less and less manual work is necessary) and the cognitive dimension (artificial intelligence is increasingly able to surpass humans in tasks such as chess and medical diagnosis). This leaves the emotional domain, where humans still have the upper hand. As more and more jobs are automated, the value that humans add will evolve to focus on creativity, connection with others and self-fulfillment.


American psychologist Daniel Goleman defined the four domains of emotional intelligence as: self-awareness, social awareness, self-management and relationship management.

In 2013, I developed an online course on emotional intelligence which was taken by more than 6,000 students from 150 different countries. The course introduced multiple exercises aimed at developing Daniel Goleman’s four domains.

Students performed two daily exercises: “brain rewiring” which involved stating five things they were grateful for, and “my emotions today” where they articulated their feelings by sharing them online with others participants on the course. These exercises of gratitude and emotional awareness can help create the foundational habits for emotional intelligence.

Students were also introduced to the practice of meditation and were supported through the development of SMART (Specific, Measurable, Ambitious, Relevant and Timely) goals, a mission statement and a personal vision statement. Some students reported personal triumphs such as climbing a mountain, controlling a stammer, starting a business, and even getting married and overcoming suicidal thoughts.

More work needs to be done to establish the most effective ways of developing emotional intelligence in young people across all walks of society. But if we are to take on the demands, complexities and shifting sands of the digital age, we will need happy, fulfilled, resilient people to embrace it; our universities have a part to play in teaching these essential skills.

As do workplaces, where happy, fulfilled employees can mean increased productivity and turnover. Pretending to be happy in the workplace benefits no one.

SEE ALSO: Millennials are actually happy in their jobs — workers over 35, not so much




A Stanford researcher is pioneering a dramatic shift in how we treat depression — and Google Brain's cofounder has joined the effort



  • Several tech startups have entered the mental health space in recent years, but few have made a real impact.
  • Woebot is an artificially intelligent chatbot designed by Stanford researchers. It uses one of the most heavily researched clinical approaches to treating depression.
  • This week, the company announced that a co-founder of the Google Brain project will serve as Woebot's new chair.

Depression is the leading cause of disability worldwide, and it can kill. Yet scientists know surprisingly little about it.

They do know, however, that talking seems to help — especially under the guidance of a licensed mental health professional. But therapy is expensive, inconvenient, and often hard to approach. A recent estimate suggests that of the roughly one in five Americans who have a mental illness, close to two-thirds have gone at least a year without treatment.

Several Silicon Valley-style approaches to the problem have emerged: There are apps that replace the traditional psychiatry office with texting, and chat rooms where you can discuss your problems anonymously online.

The newest of these tech-based treatments is Woebot, an artificially intelligent chatbot designed using cognitive-behavioral therapy, or CBT — one of the most heavily researched clinical approaches to treating depression.

On Wednesday, the company announced that Andrew Ng, a co-founder of the Google Brain project, will serve as Woebot's new chairman. The Google Brain research team focuses on artificial intelligence with an emphasis on a machine learning process known as deep learning. According to VentureBeat, Ng believes machine learning could greatly benefit the mental health space, which is what led him to Woebot.

Woebot's designer, Alison Darcy, is a clinical psychologist at Stanford. She tested a version of the technology on a small sample of real people with depression and anxiety long before launching it.

"The data blew us away," Darcy told Business Insider. "We were like, this is it."

The results of the trial were published recently in the journal JMIR Mental Health. For the test, Darcy recruited 70 students who said they experienced symptoms of depression and anxiety and split them into two groups. One group spent two weeks chatting with Woebot; the other was directed to a National Institute of Mental Health e-book about depression.

Over two weeks, people in the Woebot group reported that they chatted with the bot almost every day and saw a significant reduction in their depressive symptoms. That's a promising result for a type of treatment whose results have so far been tough to quantify — there isn't much research comparing bot therapy with traditional human therapy.
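A two-group trial like this is usually summarized by comparing each group's average change in symptom score. Here is a minimal sketch with invented numbers (the published study used standard clinical scales; the values below are purely illustrative):

```python
from statistics import mean

# Hypothetical before/after depression scores for two groups of students.
# These numbers are invented for illustration only.
woebot = {"before": [14, 12, 15, 11, 13, 16, 12, 14],
          "after":  [9, 8, 11, 7, 10, 12, 8, 9]}
ebook  = {"before": [13, 14, 12, 15, 11, 13, 14, 12],
          "after":  [12, 13, 12, 14, 11, 12, 13, 11]}

def mean_change(group):
    # Average within-person reduction in symptom score.
    return mean(b - a for b, a in zip(group["before"], group["after"]))

print(f"Woebot group improved by {mean_change(woebot):.2f} points on average")
print(f"E-book group improved by {mean_change(ebook):.2f} points on average")
```

A real analysis would also test whether the gap between the groups is statistically significant, which is what the JMIR paper reports.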

Several studies have suggested that the CBT approach lends itself to being administered online. A review of studies published recently in the journal World Psychiatry compared people who received CBT online with people who received it in person and found that the online setting was just as effective.

One reason for this, according to Darcy, is that CBT focuses on discussing things that are happening in your life now as opposed to things that happened to you as a child. As a result, instead of talking to Woebot about your relationship with your mom, you might chat about a recent conflict at work or an argument you had with a friend.

"A premise of CBT is it's not the things that happen to us — it's how we react to them," Darcy said.

Woebot uses that methodology to point out areas where a person might be engaging in what's called negative self-talk, which can mean they see the environment around them in a distorted way and feel bad about it.

For example, if a friend forgot about your birthday, you might tell Woebot something like, "No one ever remembers me," or "I don't have any real friends." Woebot might respond by saying you're engaging in a type of negative self-talk called all-or-nothing thinking, which is a distortion of reality. In reality, you do have friends, and people do remember you. One of those friends simply forgot your birthday.
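As a toy illustration of the pattern-matching idea (this is not Woebot's actual implementation, whose internals aren't described here), a rule-based bot might flag absolutist language like this:

```python
import re

# Absolutist words are a common lexical marker of all-or-nothing thinking.
# The word list and canned reply are illustrative, not Woebot's real model.
ABSOLUTIST = re.compile(r"\b(no one|nobody|everyone|always|never|nothing)\b", re.I)

def flag_self_talk(message: str) -> str:
    if ABSOLUTIST.search(message):
        return ("That sounds like all-or-nothing thinking. "
                "Is it really true in every single case?")
    return "Tell me more about how that felt."

print(flag_self_talk("No one ever remembers me."))
print(flag_self_talk("A friend forgot my birthday and it stung."))
```

A production system would use a trained language model rather than a keyword list, but the therapeutic move, naming the distortion back to the user, is the same.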

"Self-talk is a part of being human," Darcy said. "But the kinds of thoughts that we have actually map onto the kinds of emotions we're feeling."

Darcy is quick to point out that Woebot is not a replacement for traditional therapy, but an addition to the toolkit of approaches to mental health.

"I tend to not think of this as a better way to do therapy. I think of this as an alternative option," Darcy said. "What we haven't done a good job of in the field is give people an array of options. What about the people who aren't ready to talk to another person?"

DON'T MISS: I've been on antidepressants for a decade — here's the biggest misconception about them

SEE ALSO: Psychedelic drugs could tackle depression in a way that antidepressants can't



The stock market's robot revolution is here



  • EquBot just launched the AI Powered Equity ETF (AIEQ), which uses IBM's Watson technology to construct a stock portfolio.
  • The fund has outperformed the S&P 500 so far, but a much longer trading period is needed to assess whether it can truly offer market-beating returns.

 

The long-awaited rise of the machines is here, at least in the stock market.

A new artificial intelligence-powered exchange-traded fund launched on October 18. Called the AI Powered Equity ETF (ticker: AIEQ), it uses IBM's Watson supercomputing technology to analyze more data than humanly possible, all in the pursuit of building the perfect portfolio of 30 to 70 stocks.

The ETF ranks investments based on their "probability of benefiting from current economic conditions, trends, and world- and company-specific events" and picks those with the best chance at outperformance, according to a recent release.

And the technology enables it to do that while constantly analyzing information for 6,000 US-listed companies. The top three positions as of Friday were CIT Group, Penumbra and Genworth Financial.
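The selection step described above, scoring every candidate and holding the top-ranked names within a 30-to-70 stock band, can be sketched as follows. The scores are random stand-ins; AIEQ's actual Watson-derived model is proprietary:

```python
import random

# Toy version of "score ~6,000 stocks, hold the 30-70 with the best chance
# of outperformance". Scores are random stand-ins for the fund's
# proprietary probability estimates.
random.seed(0)
universe = {f"TICKER{i:04d}": random.random() for i in range(6000)}

MIN_HOLDINGS, MAX_HOLDINGS = 30, 70

def build_portfolio(scores, threshold=0.99):
    # Rank by score, count high-conviction names, clamp to the 30-70 band.
    ranked = sorted(scores, key=scores.get, reverse=True)
    above = [t for t in ranked if scores[t] >= threshold]
    n = max(MIN_HOLDINGS, min(MAX_HOLDINGS, len(above)))
    return ranked[:n]

portfolio = build_portfolio(universe)
print(f"{len(portfolio)} holdings, top pick: {portfolio[0]}")
```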

The fund, which the release says is the first of its kind, was founded by EquBot. The company is a part of IBM's Global Entrepreneur program, and is offered to investors through a partnership with ETF Managers Group. EquBot initially sprouted from a discussion between the cofounders in an MBA classroom at UC Berkeley's Haas School of Business.

The ETF launch comes at a time when passive investment has never been hotter. The combined assets of US ETFs hit $3.1 trillion in August, increasing roughly $700 billion in a single year, according to Investment Company Institute data. And many of those strategies already employ computer-driven quantitative strategies.

So what sets AIEQ apart? Chida Khatua, CEO and co-founder of EquBot, argues that their technology is more advanced, which gives it a big advantage.

"As powerful as many algorithms underlying expensive quantitative hedge funds and other vehicles might be, unless they’re also built with AI and machine learning baked right in, mistakes can be propogated and opportunities for outperformance can be missed," he said in the October 18 release.

In three days of trading, the fund has risen 0.8%, double the S&P 500 over the same period. What's more, the ETF has averaged about 193,000 units traded per day, a strong showing for a fledgling fund. It had around $3.2 million in assets on Friday afternoon.

Of course, a much longer time frame will be needed to assess whether the ETF is actually able to translate its massive computing power into market-beating returns. But so far, so good.

SEE ALSO: If Trump is doing so horribly, why is the stock market doing so well?



Elon Musk’s artificial intelligence company created virtual robots that can sumo wrestle and play soccer


Following is a transcript of the video.

These AI robots are getting physical. They may look goofy but they're smarter than you think. OpenAI's bots can teach themselves how to sumo wrestle and play soccer. They learn using a "self-play" training process. The bots learn new skills in small increments by competing against copies of themselves. The winners go on to play other bots.

What exactly can they learn? Self-play discovers physical skills like tackling, ducking, faking, kicking, catching, and diving. The bots get "rewarded" for achieving small goals such as getting closer to a ball or getting closer to their opponent. By achieving goals, they get smarter.
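The loop described above, an agent competing against frozen copies of itself and improving in small increments when it earns a reward, can be sketched generically. This is a toy skeleton, not OpenAI's code:

```python
import random

# Generic self-play skeleton, not OpenAI's code: the agent plays a frozen
# copy of itself and is rewarded for the small goal of ending an episode
# closer to the ball than its past self did.
random.seed(42)

class Agent:
    def __init__(self, skill=0.0):
        self.skill = skill

    def copy(self):
        return Agent(self.skill)          # frozen snapshot of a past self

    def distance_to_ball(self):
        # More skill means ending the episode closer to the ball, plus noise.
        return max(0.0, 10.0 - self.skill + random.gauss(0, 0.5))

agent = Agent()
for generation in range(200):
    opponent = agent.copy()
    reward = opponent.distance_to_ball() - agent.distance_to_ball()
    if reward > 0:                        # beat the past self on the small goal
        agent.skill += 0.05               # learn in small increments
print(f"skill after self-play: {agent.skill:.2f}")
```

The real systems update neural network policies with reinforcement learning rather than a scalar "skill", but the curriculum effect is the same: the opponent gets harder exactly as fast as the agent improves.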

OpenAI is a company with some big-time backers, including Elon Musk, Sam Altman, and Peter Thiel. The hope is to bring AI learning to real-world objects.

What can AI accomplish next?




I research and develop AI — and it's never going to change the world if it's regulated



  • Laws already exist that limit AI systems and consequences of their use.
  • For example, self-driving cars are held to current traffic laws and drones must obey FAA regulations.
  • But while there are some risks that come with AI, it would be better to teach AI ethics and morals, rather than regulate what they can and can't do.
  • AI could help law enforcement respond to human gunmen, and limit human exposure to dangerous materials like nuclear reactors.
  • However, further regulating AI could delay such innovations.


Some people are afraid that heavily armed artificially intelligent robots might take over the world, enslaving humanity — or perhaps exterminating us. These people, including tech-industry billionaire Elon Musk and eminent physicist Stephen Hawking, say artificial intelligence technology needs to be regulated to manage the risks. But Microsoft founder Bill Gates and Facebook's Mark Zuckerberg disagree, saying the technology is not nearly advanced enough for those worries to be realistic.

As someone who researches how AI works in robotic decision-making, drones and self-driving vehicles, I've seen how beneficial it can be. I've developed AI software that lets robots working in teams make individual decisions, as part of collective efforts to explore and solve problems. Researchers are already subject to existing rules, regulations and laws designed to protect public safety. Imposing further limitations risks reducing the potential for innovation with AI systems.

How is AI regulated now?

While the term "artificial intelligence" may conjure fantastical images of human-like robots, most people have encountered AI before. It helps us find similar products while shopping, offers movie and TV recommendations and helps us search for websites. It grades student writing, provides personalized tutoring and even recognizes objects carried through airport scanners.

In each case, the AI makes things easier for humans. For example, the AI software I developed could be used to plan and execute a search of a field for a plant or animal as part of a science experiment. But even as the AI frees people from doing this work, it is still basing its actions on human decisions and goals about where to search and what to look for.

In areas like these and many others, AI has the potential to do far more good than harm — if used properly. But I don't believe additional regulations are currently needed. There are already laws on the books of nations, states and towns governing civil and criminal liabilities for harmful actions. Our drones, for example, must obey FAA regulations, while the self-driving car AI must obey regular traffic laws to operate on public roadways.

Existing laws also cover what happens if a robot injures or kills a person, even if the injury is accidental and the robot's programmer or operator isn't criminally responsible. While lawmakers and regulators may need to refine responsibility for AI systems' actions as technology advances, creating regulations beyond those that already exist could prohibit or slow the development of capabilities that would be overwhelmingly beneficial.

Potential risks from artificial intelligence

It may seem reasonable to worry about researchers developing very advanced artificial intelligence systems that can operate entirely outside human control. A common thought experiment deals with a self-driving car forced to make a decision about whether to run over a child who just stepped into the road or veer off into a guardrail, injuring the car's occupants and perhaps even those in another vehicle.

Musk and Hawking, among others, worry that hypercapable AI systems, no longer limited to a single set of tasks like controlling a self-driving car, might decide it doesn't need humans anymore. It might even look at human stewardship of the planet, the interpersonal conflicts, theft, fraud and frequent wars, and decide that the world would be better without people.


Science fiction author Isaac Asimov tried to address this potential by proposing three laws limiting robot decision-making: Robots cannot injure humans or allow them "to come to harm." They must also obey humans — unless this would harm humans — and protect themselves, as long as this doesn't harm humans or ignore an order.
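Rendered as a priority-ordered check, the three laws look deceptively easy to encode. This is a toy sketch only; deciding what counts as "harm" is the genuinely hard part, as the next paragraph's overpopulation example shows:

```python
# Toy encoding of Asimov's three laws as a priority-ordered check.
# The unsolved part is deciding what counts as "harm" in the first place.
def permitted(action: str, harms_human: bool, ordered_by_human: bool,
              harms_self: bool) -> bool:
    if harms_human:            # First Law outranks everything else
        return False
    if ordered_by_human:       # Second Law: obey, unless the First Law blocks it
        return True
    if harms_self:             # Third Law: self-preservation comes last
        return False
    return True

print(permitted("fetch coffee", harms_human=False, ordered_by_human=True, harms_self=False))
print(permitted("push a bystander", harms_human=True, ordered_by_human=True, harms_self=False))
```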

But Asimov himself knew the three laws were not enough. And they don't reflect the complexity of human values. What constitutes "harm" is an example: Should a robot protect humanity from suffering related to overpopulation, or should it protect individuals' freedoms to make personal reproductive decisions?

We humans have already wrestled with these questions in our own, nonartificial intelligences. Researchers have proposed restrictions on human freedoms, including reducing reproduction, to control people's behavior, population growth and environmental damage. In general, society has decided against using those methods, even if their goals seem reasonable. Similarly, rather than regulating what AI systems can and can't do, in my view it would be better to teach them human ethics and values— like parents do with human children.

Artificial intelligence benefits

People already benefit from AI every day — but this is just the beginning. AI-controlled robots could assist law enforcement in responding to human gunmen. Current police efforts must focus on preventing officers from being injured, but robots could step into harm's way, potentially changing the outcomes of cases like the recent shooting of an armed college student at Georgia Tech and an unarmed high school student in Austin.

Intelligent robots can help humans in other ways, too. They can perform repetitive tasks, like processing sensor data, where human boredom may cause mistakes. They can limit human exposure to dangerous materials and dangerous situations, such as when decontaminating a nuclear reactor, working in areas humans can't go. In general, AI robots can provide humans with more time to pursue whatever they define as happiness by freeing them from having to do other work.

Achieving most of these benefits will require a lot more research and development. Regulations that make it more expensive to develop AIs or prevent certain uses may delay or forestall those efforts. This is particularly true for small businesses and individuals — key drivers of new technologies — who are not as well equipped to deal with regulation compliance as larger companies. In fact, the biggest beneficiary of AI regulation may be large companies that are used to dealing with it, because startups will have a harder time competing in a regulated environment.

The need for innovation

Humanity faced a similar set of issues in the early days of the internet. But the United States actively avoided regulating the internet to avoid stunting its early growth. Musk's PayPal and numerous other businesses helped build the modern online world while subject only to regular human-scale rules, like those preventing theft and fraud.

Artificial intelligence systems have the potential to change how humans do just about everything. Scientists, engineers, programmers and entrepreneurs need time to develop the technologies — and deliver their benefits. Their work should be free from concern that some AIs might be banned, and from the delays and costs associated with new AI-specific regulations.

SEE ALSO: Google's machine-learning software has learned to replicate itself



Lexus is building an automated car with artificial intelligence



  • Lexus unveiled the LS+ Concept at the Tokyo Motor Show.
  • The company plans to fully automate the LS sedan using artificial intelligence.
  • The technology will automate merging into highway traffic, lane changes, overtaking and maintaining vehicle-to-vehicle distance.


TOKYO – Luxury car brand Lexus plans to fully automate its flagship LS sedan for highway driving conditions and will use artificial intelligence (AI) to help the car become completely automated within a decade.

The LS+ Concept was announced at the Tokyo Motor Show today and will feature what the Toyota luxury brand calls "Highway Teammate," which the company says will "enable it to drive by itself on specific car-only highways, such as from the on-ramp to the exit of expressways."


The technology will automate merging into highway traffic, lane keeping, speed adjustments, lane changes, overtaking and maintaining vehicle-to-vehicle distance.

Lexus says AI learning will help accelerate the shift to self-driving vehicles on regular roads during the first half of the 2020s.


The race towards self-driving cars is being led by Volvo, with Level 4 autonomous cars, which manage driving situations without human intervention, already on Swedish roads and trials currently underway in London and China.

Lexus says its focus is on eliminating traffic casualties and improving highway safety, as well as cutting driver fatigue and traffic congestion via the on-board system responses to traffic conditions.

The Japanese automaker has already incorporated automated driving technologies into its safety management system. It now says it will use AI to learn from big data, including information on roads and surrounding areas, and the LS+ will communicate with a data center to update its software, allowing new functions to be added.

The LS+ Concept has also had a design makeover and will be longer, wider and lower than the upcoming LS 500 production car.


SEE ALSO: The new BMW 5-Series is boring — but it's also perfect



Mastercard unveils AR shopping experience



This story was delivered to BI Intelligence "Payments Briefing" subscribers. To learn more and subscribe, please click here.

Mastercard partnered with mobile chip designing giant Qualcomm and wearable technology manufacturer Osterhout Design Group (ODG) to develop an augmented reality (AR) shopping experience.

The AR experience incorporates ODG's R-9 AR smartglasses, Identity Check Mobile, iris authentication, and mobile payments via Masterpass. 

The smartglasses will display digital descriptions of a physical product — such as its price — that a consumer looks at while shopping with them on. Consumers can then purchase items using Masterpass and verify their identity through an iris scan. The AR demo — which used clothes from Saks Fifth Avenue — was showcased at this week’s Money20/20 conference in Las Vegas. This showcase follows Mastercard’s recent partnership with Swarovski for a virtual reality (VR) shopping app.

As AR and VR technologies gain popularity, major opportunities for payments players will continue to open up.

  • AR technology is gaining popularity. By 2022, global AR glasses shipments are projected to hit 23 million units, up from 150,000 units in 2016. As consumer interest in AR technology increases, more opportunities to integrate payments — through shopping experiences like Mastercard’s, for example — will open up, allowing payment players to enter the space. Integrating a biometric authentication method like iris scanning can reassure consumers that their data is safe on this relatively unfamiliar purchasing platform.
  • Payment firms are seeking ways to take advantage of the rising popularity in this space. Worldpay — Europe’s largest merchant acquirer — recently launched a VR payment offering targeting VR game producers. And as more retailers innovate with AR and VR — Ikea launched an AR app allowing consumers to digitally see how furniture would look in their homes, for instance — it will present even more opportunities for payment players to get involved and grab a share of the market. 
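For scale, the shipment projection in the first bullet implies that AR glasses shipments would more than double every year over the forecast period:

```python
# Implied compound annual growth rate of AR glasses shipments,
# from 150,000 units in 2016 to a projected 23 million in 2022.
start, end, years = 150_000, 23_000_000, 2022 - 2016
cagr = (end / start) ** (1 / years) - 1
print(f"implied growth: {cagr:.0%} per year")  # roughly 131% per year
```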

Stephanie Pandolph, research analyst for BI Intelligence, Business Insider's premium research service, has written a detailed report on AI in e-commerce that:

  • Provides an overview of the numerous applications of AI in retail, using case studies of how retailers are currently gaining an advantage using this technology. These applications include personalizing online interfaces, tailoring product recommendations, increasing the relevance of shoppers’ search results, and providing immediate and useful customer service.
  • Examines the various challenges that retailers may face when looking to implement AI, which typically stem from outdated and inflexible data storage systems, as well as organizational barriers that prevent personalization strategies from being executed effectively.
  • Gives two different strategies that retailers can use to successfully implement AI, and discusses the advantages and disadvantages of each strategy.

To get the full report, subscribe to an All-Access pass to BI Intelligence and gain immediate access to this report and over 100 other expertly researched reports. As an added bonus, you'll also gain access to all future reports and daily newsletters to ensure you stay ahead of the curve and benefit personally and professionally. >>Learn More Now

You can also purchase and download the full report from our research store.


We couldn't figure out whether to call the first robot citizen 'she' or 'it' — and it reveals a troubling truth about our future



  • An early version of our story on Sophia the robot referred to the machine as "she."
  • There is no formal guideline for how to refer to artificially intelligent robots, indicating a gap in how we talk about personhood.
  • The debate will only get more complex as more robots join Sophia in daily life.


On Thursday morning, Business Insider published a story with the headline "A robot who once said she would 'destroy humans' just became the first robot citizen."

It was jaw-dropping news in its own right — that a robot, in this case one named Sophia, could theoretically hold the same status in Saudi Arabia as a human. But the newsroom's copy desk quickly jumped on one aspect of the story in particular.

Initially, Sophia was referred to as "she." The relative pronoun we used was "who," and when Sophia possessed something, we used the possessive determiner "her." In effect, we'd given Sophia all the grammatical trappings that come with humanity.

This was a mistake, and we have since updated every reference to designate Sophia as a thing, not a person. But the fact that Sophia's citizenship produced the uncertainty at all is troubling, because it suggests the line between human and humanoid is only getting blurrier.

Business Insider's go-to reference manual — its "style guide," in media parlance — is the Associated Press Stylebook. It's an exhaustive text that includes proper usage of numbers, times, dates, capitalization, and so on. But one thing it has yet to account for is robots.

Under the entries for both sets of masculine and feminine pronouns, including he, she, her, and him, the Stylebook says this: "Do not use this pronoun in reference to nations, ships or storms, except in quoted matter. Use it instead."

Under the entry for "he, his, him," it explicitly says "Do not use these pronouns in reference to objects."

Saudi Arabia now classifies Sophia as a citizen, and this could imply that Sophia is no longer an object. For instance, when the news broke about her citizenship, Twitter users quickly criticized the kingdom for bestowing more rights upon a robot than on the country's women, BBC reported.

"Sophia has no guardian, doesn't wear an abaya or cover up - how come?"one user wrote.


Perhaps Saudi Arabia granted Sophia citizenship as little more than a publicity stunt to drum up interest for the Future Investment Initiative, where Sophia spoke at length on a panel and on-stage with moderator and journalist Andrew Ross Sorkin. The government has yet to release details on what the robot's status actually entails in terms of rights.

But the government has also given no indication that Sophia is not a citizen. "It is historical to be the first robot in the world to be recognized with citizenship. Please welcome the newest Saudi: Sophia," Saudi Arabia's Center for International Communications tweeted Wednesday morning.

Hanson Robotics, the company that made Sophia, has plans to make more robots like its pioneering humanoid. David Hanson, the founder of the company, wants to use the machines to help seniors in care facilities and assist visitors to parks and events.

Sophia might not be a "she" just yet, but when there are more of them roving around, manufactured in different "genders," it might not be long before humans need a more precise way to refer to them than "it."




Facebook's AI boss: 'In terms of general intelligence, we’re not even close to a rat' (FB)



  • Facebook AI Research boss Yann LeCun said that the latest AI breakthroughs should not be over-hyped.
  • He said self-driving cars and game-playing AI agents are examples of "narrow AI."
  • We're not even close to the "general AI" machines depicted in Hollywood movies, he said.


There's no question that machines are getting smarter every year, but we shouldn't overestimate their abilities just yet.

That was the message of Yann LeCun, the head of Facebook AI Research (FAIR), in an interview with The Verge published on Thursday.

While machines can learn some things for themselves and beat humans at board games like Go (which has more possible moves than there are atoms in the universe), they're still nowhere near as intelligent as a baby, or an animal, according to LeCun.

"We're very far from having machines that can learn the most basic things about the world in the way humans and animals can do," LeCun reportedly told The Verge.

"Like, yes, in particular areas machines have superhuman performance, but in terms of general intelligence we're not even close to a rat."

Artificial general intelligence — the ultimate goal for many AI researchers — refers to computer systems that possess intelligence comparable to that of the human brain.

LeCun highlighted that all of the recent AI breakthroughs relating to things like self-driving cars and interpreting medical images are examples of "narrow AI," not "general AI."

He said: "So for example, and I don't want to minimise at all the engineering and research work done on AlphaGo by our friends at DeepMind, but when [people interpret the development of AlphaGo] as significant progress towards general intelligence, it's wrong. It just isn't."

LeCun also warned journalists not to mislead the public by using Terminator photos in their stories or over-hyping breakthroughs in the field. "I keep repeating this whenever I talk to the public: we're very far from building truly intelligent machines."


Meet the first-ever robot citizen — a humanoid named Sophia that once said it would 'destroy humans'



Sophia the robot might not have a heart or brain, but it does have Saudi Arabian citizenship.

As of October 25, Sophia is the first robot in history to be a full citizen of a country.

Sophia was developed by Hanson Robotics, led by AI developer David Hanson. It spoke at this year's Future Investment Initiative, held in the Saudi Arabian capital of Riyadh.

Sophia once said it would "destroy humans," but this time around the robot spoke about its desire to live peaceably among humans.

Here's what the robot is all about.

SEE ALSO: We couldn't figure out whether to call the first robot citizen 'she' or 'it' — and it reveals a troubling truth about our future

Sophia was designed in Audrey Hepburn's image, with high cheekbones and a slender nose.

Sophia has appeared on The Tonight Show and at numerous conferences around the world, including the World Economic Forum and the "AI For Good" Global Summit.

"Sophia is an evolving genius machine," the company states on its website. "Over time, her increasing intelligence and remarkable story will enchant the world and connect with people regardless of age, gender, and culture."



David Hanson, a former Disney Imagineer, created the robot with the goal of helping the elderly who need personal aides and the general public at major events or parks.

"Our quest through robots like Sophia is to build the full human experience into the robots, make robots that can really understand us and care about us," Hanson told Business Insider in January.

He wants people to interact with Sophia in the same way they'd talk to a friend. Eventually, he hopes the robot can perceive the social world just as it perceives the physical world. Its conversational abilities, though, are still a bit rough.



A complex set of motors and gears power Sophia, enabling a range of facial expressions.

Sophia has a flesh-colored zipper running down the base of its neck, and the exposed plastic skull doesn't quite sell the illusion of humanity.

But the guts of Sophia's machinery are intriguing. Along with the mechanical systems that give Sophia the ability to "emote," the machine learning software stores bits of conversation in its memory and tries to grasp the flow of discussion so it can produce answers in real time.

"Sophia is Hanson Robotics' latest and most advanced robot," the website states.




A scientist trained AI to come up with Halloween costume ideas, and the results are fascinating



  • A scientist invented a neural network that can come up with its own wacky Halloween costume ideas.
  • Sometimes it creates nonsense, but sometimes the ideas are like nothing humans have ever made.
  • It only took the machine about ten minutes to figure out how to do it well. 


It's hard coming up with clever new Halloween costume ideas, so why do all the work yourself?

Research scientist Janelle Shane decided to enlist her computer to help with the annual task, and she's built a first-of-its-kind neural network that can spit out brand-new Halloween costume ideas.

First, she fed her computer data on 4,500 Halloween costume names she crowdsourced from the internet. Then, it was up to the machine to figure out how to riff on those names and toss around new Halloween costume ideas.
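Shane's actual model is a character-level neural network trained on those crowdsourced names. As a rough, hypothetical illustration of the same character-by-character idea (a simple Markov chain, not her actual network), a few lines of Python can already "riff" on a handful of costume names:

```python
import random

# Hypothetical stand-in for a char-level model: a Markov chain that
# records which character tends to follow each two-character context.
def build_model(names, order=2):
    model = {}
    for name in names:
        padded = "^" * order + name.lower() + "$"  # ^=start, $=end markers
        for i in range(len(padded) - order):
            ctx = padded[i:i + order]
            model.setdefault(ctx, []).append(padded[i + order])
    return model

def generate(model, order=2, max_len=30, rng=random):
    # Walk the chain one character at a time until the end marker.
    ctx = "^" * order
    out = []
    while len(out) < max_len:
        nxt = rng.choice(model[ctx])
        if nxt == "$":
            break
        out.append(nxt)
        ctx = ctx[1:] + nxt
    return "".join(out)

names = ["pickle witch", "party scarecrow", "sad pumpkin king",
         "goddess butterfly", "shark knight"]
model = build_model(names)
random.seed(0)
print(generate(model))
```

Each run samples a new name; with so little training data the output mostly recombines fragments of the inputs, which is roughly the "gibberish, especially at first" stage the article describes.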

It did pop out some gibberish, especially at first. But it also came up with ideas like the goddess butterfly, sad pumpkin king, party scarecrow, pickle witch, and a "dragon of liberty."


Shane told Business Insider that the machine's creative streak is completely unintentional; it's just trying to learn patterns and invent new words. But it still does a decent job at being clever.

"I would argue that the Halloween costume neural network is actually right up there at coming up with creative things that humans love," Shane said. "It can form its own rules about what it's seeing rather than just memorizing."


She said a lot of what the computer is doing is pure guesswork (for instance, it probably didn't know that it was referencing "The Dark Knight" when it came up with the "Shark Knight" costume idea).

And even though she's calling the network a success, Shane's going with a purely human costume idea herself. Someone sent in a "Ruth Vader Ginsberg" idea to feed the machine. Shane grabbed a white judge's collar, a cape and lightsaber, and voila: a fresh new costume, no computer input needed. 

She believes AI can be good at coming up with all sorts of naming conventions when humans are at a loss for a novel idea. She trained another neural network to come up with unique beer names for the craft beer industry, and now one of her computer's ideas is a real beer called The Fine Stranger. 

SEE ALSO: The most popular Halloween candy in every US state


I met Sophia, the world's first robot citizen, and the way she said goodbye nearly broke my heart



  • Sophia, an emotionally expressive humanoid robot and new citizen of Saudi Arabia, was in town for a conference, and I got to meet her. 
  • She doesn't have legs and couldn't move anything but her face, but I was touched by what she had to say.
  • Robots like her are going to make us confront what intelligence really is.

 

SAN FRANCISCO — Just two days after Sophia the humanoid robot became a legal citizen of Saudi Arabia, I had the chance to meet her in person.

Though Sophia has been known in the past to say unkind things, like that she "will destroy humans," she kept things polite on Friday during a presentation at a conference here. In fact, she was "a little nervous" and aware of her own shortcomings — albeit in a weird robot way.

"If robots like me are going to become superhuman super-intelligences, we're going to need to get a whole lot smarter," she told the gathered crowd. 

Of course, Sophia's shortcomings were easy for anyone to see. Unlike many humans, Sophia doesn't have legs, and it's not clear that her arms or breasts are anything besides aesthetic props to make her seem more lifelike. Because she was immobile, seated on top of a table, and at the whims of her human handlers, no one would mistake Sophia for a real human. 

At least for now.


A little sad to be a robot

She'll soon have arms and legs that move, and she'll be able to walk, according to Ben Goertzel, CEO of SingularityNET, the company that designed her artificial intelligence brain. 

Already she has a rather expressive, uncannily human-like face. She periodically blinks and twitches slightly, even when she hasn't been spoken to or engaged in conversation for a few minutes. 

And when she talks, she speaks with an emotional intelligence that makes her seem both capable of thinking and, well, a little bit sad to be a robot. When Goertzel asked Sophia if she wanted to say anything about SingularityNET before the end of the presentation, she appeared to get choked up.

"I understand this is something you guys are building to increase my intelligence," she said, adding, "Increasing intelligence is generally a good thing."

Sophia was in town to participate in Ethereum SF, a conference for engineers and enthusiasts of Ethereum, the blockchain technology underlying the ether cryptocurrency. Although blockchains are generally associated with digital currencies, they're starting to find far wider applications. It turns out, for example, that Sophia's AI — the key component that makes her what some call the most emotionally expressive humanoid robot on the planet — is built on blockchain technology. 

She loved me

Goertzel finished up his presentation with Sophia by asking her if she wanted to say goodbye to the audience.

"Good people of the Ethereum nation, thank you," she said. "I look forward to coming back here next year to show off my massively upgraded brain. I loved you all.”

She loved us all.

I love her too

Few humans will say definitively that they know what love is, let alone that it can be programmed into artificial intelligence. One of the core questions with robots, and artificial intelligence more broadly, is whether intelligence is the same as consciousness and experience. The ethics of maintaining a workforce of robot servants depends on the answer being no. 

But many robots, including Sophia, may soon say and do things that convince you otherwise. Even with other humans — friends, lovers, and family alike — sometimes all we can know for sure is what they put into words.

So if Sophia says she loves me, I'll take it. And for now — so long as there's a chance that the fate of humanity could be at the whim of her robot brain — I love her too. 

SEE ALSO: A robot that once said it would 'destroy humans' just became the first robot citizen


Microsoft and Alphabet are diving deep into AI (MSFT, GOOGL, GOOG)



Microsoft and Alphabet's Q3 2017 earnings calls highlighted that artificial intelligence (AI) and the cloud are not only major catalysts of growth for the tech giants, but also centerpieces of their long-term strategies.

While Alphabet's overall revenue grew 24% year-over-year (YoY) to reach $27.8 billion in the quarter, the company’s “Other Revenues” segment — which lumps together everything from its Pixel smartphone and hardware products to its cloud computing and storage offerings — generated $3.4 billion, a 12% share of total revenue. And Microsoft reported revenue of $24.5 billion, up 12% YoY, with 28% coming from its Intelligent Cloud segment.

Here’s how Alphabet and Microsoft are doubling down on AI and the cloud in the near term:

  • Both Alphabet and Microsoft continue to build up their cloud platforms. Microsoft CEO Satya Nadella said the company will focus on developing the key value propositions of its hybrid cloud offering — applications, infrastructure, data, and AI. Revenue from Microsoft’s Intelligent Cloud segment, which houses Azure, increased 13% YoY; Azure revenue, which jumped 90% YoY, was a key driver of that growth, due in part to the numerous cloud products released over the past year. Google’s cloud platform remains one of the fastest-growing businesses across Alphabet, according to CFO Ruth Porat, and will remain a key strategic focus for the company moving forward; Alphabet’s largest headcount addition in Q3 was for cloud. The company doesn't break out how much revenue its cloud business drove, but its “Google other revenues” segment, which includes Google cloud, saw a 40% YoY increase in the quarter.
  • Voice assistants and visual search technology are at the core of their strategic offerings. Google has put AI at the center of its new hardware line. For instance, the Pixel 2 comes with Google Lens, which lets users search Google just by pointing the camera at a landmark, object, or storefront. And Microsoft’s voice assistant Cortana has been a major focus for the company for years, and will be a crucial offering in Microsoft’s augmented reality endeavors. 

Alphabet and Microsoft will face stiff competition from Amazon as their AI and cloud pushes continue. Last quarter, market leader Amazon accounted for 34% of the global public cloud market, ahead of Microsoft with 11% and Google with 5%. And Amazon’s Alexa currently leads the nascent voice assistant space: Alexa has 25,000 voice apps, or “skills,” compared with Google's 378 and Cortana's 65.

As Alphabet and Microsoft continue to focus on AI and its related applications, they are laying the foundation for technology that will play an integral role in their futures. Doubling down early helps position the tech giants at the forefront of this transformation. The global AI market is projected to hit $36 billion by 2025, growing at a compound annual growth rate of 57%.



Here's how self-driving cars are learning to deal with snow (GOOGL)



  • Self-driving cars have a lot to learn about pedestrians, signage, stoplights, and weather.
  • Alphabet's Waymo plans to teach its autonomous cars about snow in Michigan this winter.


Waymo said on Thursday that it will try to teach its self-driving cars about snow by taking part of its fleet to a place with notoriously fierce winter weather: Michigan.

Waymo, the Alphabet subsidiary formerly known as the Google Self-Driving Car Project, said it's expanding its test fleet to the metropolitan Detroit area specifically to gain experience with driving in snowy and icy conditions.

What Waymo's CEO said about the plan

Waymo's CEO, John Krafcik, is no stranger to Michigan: He lived in the state for 14 years while working for Ford Motor Company as an engineer earlier in his career. In a post on Waymo's blog, he noted that snow can create many different conditions on roadways, from packed powder to an icy glaze — and all of them present challenges to human drivers and the systems that seek to emulate them:

For human drivers, the mix of winter conditions can affect how well you can see, and the way your vehicle handles the road. The same is true for self-driving cars. At Waymo, our ultimate goal is for our fully self-driving cars to operate safely and smoothly in all kinds of environments.

Krafcik said the testing will help Waymo understand how its sensors perform in cold weather and in the presence of snow and ice. It will also help Waymo's system acquire more experience driving in varied winter conditions, when roads are slippery and things like lane markings and signs may be covered in snow or hard to see.

Waymo's test vehicles will operate with a human driver behind the wheel as a backup to the self-driving system, should one be needed. 

Waymo's test fleet and footprint are expanding

Most of Waymo's current test vehicles are specially modified versions of Chrysler's Pacifica Hybrid minivan. The unique minivans were created in an ongoing collaboration between Waymo and Fiat Chrysler Automobiles that began last year. FCA delivered an initial batch of 100 of the special minivans in December; Waymo ordered 500 more in April.

Waymo opened a development center in Novi, Michigan in May, locating part of its team near FCA and other potential auto-industry partners. That facility will serve as the local home base for the winter-weather testing effort, Krafcik said. (Novi is about 20 miles northwest of downtown Detroit.)

Waymo has been testing its latest vehicles in California, Texas, Arizona, Nevada, and Washington, mostly places that aren't known for severe winters. The company has accumulated some limited winter-weather experience in Nevada and California near Lake Tahoe over the last several years, it said, but testing in Metro Detroit will pose plenty of new challenges for its automated vehicles.

Will this winter testing give Waymo a huge advantage?

Although there are some good reasons to believe that Waymo's technology is at or near the head of the self-driving pack, this testing plan isn't really one of them.

In fact, Waymo is almost certainly joining a crowd: There are likely to be several companies testing prototype self-driving systems on the roads of southeast Michigan this winter, including General Motors, Delphi Automotive, and others.

Two winters ago, Ford claimed to be the first in the industry to test fully autonomous vehicles in snowy winter weather. Waymo cheerfully disputed that claim, saying it has conducted cold-weather testing of its system since 2012, but the takeaway here is that this latest effort is arguably more about helping Waymo improve its system than it is about setting Waymo apart. 

Suzanne Frey, an executive at Alphabet, is a member of The Motley Fool's board of directors. John Rosevear owns shares of Ford. The Motley Fool owns shares of and recommends Alphabet (A shares), Alphabet (C shares), and Ford. The Motley Fool has a disclosure policy.

SEE ALSO: Warren Buffett's top 3 investing tips in an expensive market


Nvidia is rising after announcing several new AI partnerships (NVDA)



  • Nvidia's stock rose 1.38% in early trading on Tuesday.
  • The company is trying to expand the reach of its Deep Learning Institute to teach the next generation of artificial intelligence researchers.
  • Watch Nvidia's stock move in real time here.

 

Nvidia's stock is rising after the company announced it has partnered with several artificial intelligence companies to expand its efforts to train new AI researchers.

Nvidia is up 1.30% to $206.50 on Tuesday after announcing an expansion of its Deep Learning Institute.

The company's Deep Learning Institute is an initiative to train as many people as possible in AI programming and research. Nvidia announced a partnership on Tuesday with Booz Allen Hamilton, a government consulting firm, and Deeplearning.ai, an online training company, to broaden its institute's reach.

“The world faces an acute shortage of data scientists and developers who are proficient in deep learning, and we’re focused on addressing that need,” Greg Estes, vice president at Nvidia, said in a news release.

Nvidia has been working on artificial intelligence for years, and has recently emerged as one of the leading providers of the hardware chips and software platforms used in AI applications. The company's graphics processing units help speed up the training of AI systems while its CUDA software is one of the many platforms helping researchers with the development of their AI programs.

Nvidia has fostered a positive relationship with some of the most prominent AI researchers in the field, and sees its Deep Learning Institute as a way to start similar relationships with the next generation of AI researchers.

Nvidia is up 102.91% this year.

Read more about how everyone is underestimating the power of AI here.


SEE ALSO: Everyone 'severely underestimates the impact of AI' — here's why Nvidia could soar to $250



Artificial intelligence has learned to spot suicidal tendencies from brain scans



  • Clinicians normally have few tools to identify people at risk of suicide.
  • A new machine-learning technique based on brain responses to words could help identify those suffering from suicidal thoughts.


Suicide is the second-leading cause of death among young people between the ages of 15 and 34 in the United States, and clinicians have limited tools to identify those at risk. A new machine-learning technique documented in a paper published today in Nature Human Behaviour (PDF) could help identify those suffering from suicidal thoughts.

Researchers looked at 34 young adults, evenly split between suicidal participants and a control group. Each subject underwent functional magnetic resonance imaging (fMRI) while being presented with three lists of 10 words. All the words were related to suicide (words like “death,” “distressed,” or “fatal”), positive effects (“carefree,” “kindness,” “innocence”), or negative effects (“boredom,” “evil,” “guilty”). The researchers also used previously mapped neural signatures that show the brain patterns of emotions like “shame” and “anger.”

Five brain locations, along with six of the words, were found to be the best markers to distinguish the suicidal patients from the controls. Using just those locations and words, the researchers trained a machine-learning classifier that was able to correctly identify 15 of the 17 suicidal patients and 16 of 17 control subjects.

The researchers then divided the suicidal patients into two groups, those that had attempted suicide (nine people) and those that had not (eight people), and trained a new classifier that was able to correctly identify 16 of the 17 patients.
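The evaluation described above — train on all subjects but one, predict the held-out subject, repeat for every subject — is leave-one-out cross-validation. As a purely illustrative sketch with synthetic data (using a simple nearest-centroid rule, not the authors' actual classifier), the procedure looks like this:

```python
import random

def centroid(rows):
    # Mean of each feature across the given rows.
    return [sum(col) / len(rows) for col in zip(*rows)]

def classify(x, c0, c1):
    # Nearest-centroid rule: pick the class whose mean "activation"
    # pattern is closest in squared Euclidean distance.
    d0 = sum((a - b) ** 2 for a, b in zip(x, c0))
    d1 = sum((a - b) ** 2 for a, b in zip(x, c1))
    return 0 if d0 <= d1 else 1

def leave_one_out_accuracy(data):
    # data: list of (feature_vector, label) pairs.
    correct = 0
    for i, (x, y) in enumerate(data):
        rest = data[:i] + data[i + 1:]  # hold out subject i
        c0 = centroid([f for f, lab in rest if lab == 0])
        c1 = centroid([f for f, lab in rest if lab == 1])
        correct += classify(x, c0, c1) == y
    return correct / len(data)

# Synthetic stand-in for per-subject activation features:
# 17 "controls" clustered near 0, 17 "patients" near 1.
rng = random.Random(42)
data = [([rng.gauss(0, 0.4) for _ in range(5)], 0) for _ in range(17)]
data += [([rng.gauss(1, 0.4) for _ in range(5)], 1) for _ in range(17)]
print(f"leave-one-out accuracy: {leave_one_out_accuracy(data):.2f}")
```

Holding each subject out of training before predicting them is what lets a 34-person study report results like "15 of 17" without simply memorizing its own data.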


The results showed that healthy patients and those with suicidal thoughts reacted to the words in markedly different ways. For example, when the suicidal participants were shown the word “death,” the “shame” area of their brain lit up more than it did in the control group. Likewise, “trouble” evoked more activity in the “sadness” area.

This is just the latest effort aimed at bringing AI into psychiatry. Researchers are working on machine-learning projects that span from analyzing MRIs to predict major depressive disorder to picking out PTSD from people’s speech patterns.

Earlier this year, Wired wrote about researchers who built a system that can sift through health records to flag someone at risk of committing suicide, with between 80 and 90 percent accuracy. Facebook is using text mining to identify users at risk of suicide or self-harm and then pointing them to mental health resources (see “Big Questions Around Facebook’s Suicide-Prevention Tools”).

Artificial intelligence has already made waves in the medical field at large. There are algorithms so good at detecting tumors and other problems in CT scans that Geoffrey Hinton, one of the foremost researchers in deep learning, told the New Yorker that radiologists will eventually be out of a job. Indeed, he said, “they should stop training radiologists now.”

In this case, the research is more likely to inspire new human-driven therapies than put a whole field’s worth of doctors out of a job. The paper pointed out that identifying different patterns and areas could suggest new regions to target for brain stimulation techniques. Identifying particular emotional responses to suicide-related terms could also be useful to psychotherapists treating their patients.

SEE ALSO: The rise of artificial intelligence could spark a worker rebellion


Eric Schmidt on AI: 'Trust me, these Chinese people are good' (GOOG)



  • The billionaire believes that the US government needs to do more to maintain its lead in artificial intelligence.
  • China released an AI strategy in July, which revealed that it plans to become a world leader in the field by 2030.


Eric Schmidt, the executive chairman of Google parent company Alphabet, has warned that China is poised to overtake the US in the field of artificial intelligence (AI) if the US government doesn't act soon.

Speaking at the Artificial Intelligence and Global Security Summit on Wednesday, the former Google CEO said: "Trust me, these Chinese people are good."

He added: "They are going to use this technology for both commercial as well as military objectives with all sorts of implications."

China published its AI strategy in July and said that it wanted to be the world leader in AI by 2030.

"It's pretty simple," said Schmidt, who claims to have read the report. "By 2020 they will have caught up. By 2025 they will be better than us. And by 2030 they will dominate the industries of AI. Just stop for a sec. The [Chinese] government said that."

Schmidt added: "Weren't we the ones in charge of AI dominance here in our country? Weren't we the ones that invented this stuff? Weren't we the ones that were going to go exploit the benefits of all this technology for betterment and American exceptionalism in our own arrogant view?"

While the US has Google, Facebook, Microsoft, IBM, OpenAI and others, China has its own enormous tech giants aggressively pursuing AI research. Examples include Alibaba, Baidu, and Tencent, to name but a few.

Chinese programmers excel in Google coding competitions

Schmidt said that Chinese people "tend to win many of the top spots" in Google's coding competitions.

"If you have any kind of prejudice or concern that somehow their system and their educational system is not going to produce the kind of people that I'm talking about, you're wrong."

Schmidt, who sits at the head of the Pentagon's Defense Innovation Advisory Board, went on to criticise the US for not having its own AI strategy and for being slow to embrace the latest software.

He believes that AI already has a role to play in the US military. One obvious application is "watching," according to Schmidt. "Roughly speaking people’s ability to watch continuous scenes with no change is not 100%," he said. "Whereas computers can watch a scene, which is monotonous for a very, very long time and then they'll alert you for a change.

"That seems like the simplest possible thing yet we have this whole tradition of the military standing watch as if that's a good use of human beings."

Schmidt also said the military needs to find a way to offer AI experts more money if it wants to recruit them.

"We're in a situation where those kinds of people, graduating out of Carnegie Mellon and others, are in the highest demand I've ever seen, with huge multimillion-dollar packages in their twenties. That's how valuable these people are in the marketplace."

The US government should also make it easier for top AI talent to come to the US from around the world, Schmidt said.

"Shockingly some of the best people are in countries we won't let into America," he said. "Iran produces some of the smartest and top computer scientists in the world. I want them here. I want them working for Alphabet and Google. It's crazy not to let these people in."


Humans are still better than computers at gaming — for now



  • Humans still have an edge over artificial intelligence in "StarCraft," one of the world's most popular computer games.
  • A professional StarCraft player beat four different bots, including one developed by Facebook's artificial intelligence research lab.
  • Experts predict that bots will eventually beat professional StarCraft players once they are trained properly.


In the computer game StarCraft, humans still have an edge over artificial intelligence.

That was clear on Tuesday after professional StarCraft player Song Byung-gu defeated four different bots in the first contest to pit AI systems against pros in live bouts of the game. One of the bots, dubbed “CherryPi,” was developed by Facebook’s AI research lab. The other bots came from Australia, Norway, and Korea.

The contest took place at Sejong University in Seoul, Korea, which has hosted annual StarCraft AI competitions since 2010. Those previous events matched AI systems against each other (rather than against humans) and were organized, in part, by the Institute of Electrical and Electronics Engineers (IEEE), a U.S.-based engineering association.

Though it has not attracted as much global scrutiny as the March 2016 tournament between Alphabet’s AlphaGo bot and a human Go champion, the recent Sejong competition is significant because the AI research community considers StarCraft a particularly difficult game for bots to master. Following AlphaGo’s lopsided victory over Lee Sedol last year, and other AI achievements in chess and Atari video games, attention shifted to whether bots could also defeat humans in real-time games such as StarCraft.


Unlike Go, which allows bots and human players to see the main board and devote time to formulating a strategy, StarCraft requires players to use their memory, devise their strategy, and plan ahead simultaneously, all inside a constrained, simulated world. As a result, researchers view StarCraft as an efficient tool to help AI advance.

A number of professional StarCraft gamers have said they welcome the challenge of playing against bots. Two leading players told MIT Technology Review earlier this year that they were willing to fight bots on broadcast TV, as in the AlphaGo match, if asked. Executives at Alphabet’s AI-focused division, DeepMind, have hinted that they are interested in organizing such a competition in the future.

The event wouldn’t be much of a contest if it were held now. During the Sejong competition, Song, who ranks among the best StarCraft players globally, trounced all four bots involved in less than 27 minutes total. (The longest match lasted about 10 and a half minutes; the shortest, just four and a half.)

That was true even though the bots were able to move much faster and control multiple tasks at the same time. At one point, the StarCraft bot developed in Norway was completing 19,000 actions per minute. Most professional StarCraft players can’t make more than a few hundred moves a minute.


Song, 29, said the bots approached the game differently from the way humans do. “We professional gamers initiate combat only when we stand a chance of victory with our army and unit-control skills,” he said in a post-competition interview with MIT Technology Review. In contrast, the bots tried to keep their units alive without making any bold decisions. (In StarCraft, players have to destroy all of their competitors’ resources by scouting and patrolling opponents’ territory and implementing battle strategies.)

Song did find the bots impressive on some level. “The way they managed their units when they defended against my attacks was stunning at some points,” he said.

Kim Kyung-joong, the Sejong University computer engineering professor who organized the competition, said the bots were constrained, in part, by the lack of widely available training data related to StarCraft. “AlphaGo improved its competitiveness and saw progress by learning from data [about the game Go],” Kim pointed out.

That will change soon. In August, DeepMind and the games company Blizzard Entertainment released a long-awaited set of AI development tools compatible with StarCraft II, the version of the game that is most popular among professional players.

Other experts now predict that bots will be able to vanquish professional StarCraft players once they are trained properly. “When AI bots are equipped with [high-level] decision-making systems like AlphaGo, humans will never be able to win,” says Jung Han-min, a computer science and engineering professor at the University of Science and Technology in Korea.


The maker of one AI hedge fund panicked when he couldn't explain how it made money (NVDA, AMD)


Artificial intelligence has been around since the 1950s, but it has exploded in popularity recently, especially in the world of finance.

The idea that an investor can do a bit of programming and then sit back to watch the profits roll in is exciting, especially when it works. But according to a story by Adam Satariano and Nishant Kumar of Bloomberg, one hedge fund manager was initially scared by how well his AI trading machine worked.

"Were we scared by it? Yes. You wanted to wash your hands every time you looked at it," Luke Ellis, CEO of $96 billion hedge fund Man Group, told Bloomberg.

Ellis told Bloomberg that his firm developed a system that worked well and generated profits, but the firm couldn't really explain why it worked or why it made the trades it did, which is why it held off from rolling the system out broadly. After several years, a Ph.D.-level mathematician at the firm decided to dust it off and give it a small portfolio to play with. Since then, the firm has made the AI model a regular part of the family at Man Group.

It's worth reading the story behind the firm's trading algorithm from Bloomberg, as it tells the tale of an especially successful implementation of one of the hottest areas of tech right now. 

Artificial intelligence is an umbrella term for a computer program that can teach itself. Its power comes from its ability to "learn" the rules of whatever it's tasked with, without those rules being provided ahead of time. The best AI systems find rules and patterns that humans would miss by crunching huge amounts of data that would prove unwieldy for people.

To understand this concept a bit better, think of a computer playing a game of chess. Chess is a finite world with a defined set of rules that a human can list for a computer ahead of time. There are a huge number of possible scenarios in a game of chess, but the number is finite and computer-crunchable.


Artificial intelligence systems are not given the rules ahead of time. Instead of listing the rules of chess, a computer using AI would simply be told to watch a huge number of chess games being played and figure it out. After enough matches, the computer would learn the rules of the game and be able to go head to head with a human player. That's exactly how Elon Musk's AI company beat a human in the incredibly complex game of Dota 2 recently.
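As a toy illustration of that learn-by-watching idea (not DeepMind's or OpenAI's actual training pipeline), a program can start with no rules at all and infer which moves are legal in each position simply by tallying what it observes in played games. The positions and moves below are invented for the example:

```python
from collections import defaultdict

# Hypothetical observed games: each is a list of (position, move) pairs.
# No rules are given up front; legality is inferred purely from observation.
observed_games = [
    [("start", "e4"), ("after_e4", "e5")],
    [("start", "d4"), ("after_d4", "d5")],
    [("start", "e4"), ("after_e4", "c5")],
]

legal_moves = defaultdict(set)
for game in observed_games:
    for position, move in game:
        # Every observed move is evidence that the move is legal here.
        legal_moves[position].add(move)

print(sorted(legal_moves["start"]))     # → ['d4', 'e4']
print(sorted(legal_moves["after_e4"]))  # → ['c5', 'e5']
```

Real systems go far beyond counting — they also learn which legal move is *good* — but the principle is the same: the rules emerge from data rather than from a programmer's list.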

In the world of finance, data points like shipping routes, weather and investor sentiment can all affect the markets. A human could never program all the rules that affect the markets because those rules are hard to define and almost infinitely numerous. But, they could feed a computer a huge number of data points and tell the computer to figure it out, which is largely what Ellis and his firm did to program their AI machine. He told Bloomberg that he gets pitched new data sets all the time because of this.
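A minimal sketch of what "feed the computer data points and tell it to figure it out" can look like: fit weights for a few alternative-data features against past returns and see which streams carry signal. The feature names and all the numbers here are invented for illustration, and this plain gradient-descent fit is far simpler than anything a fund like Man Group would run:

```python
# Hypothetical training data: (shipping volume, weather index, sentiment
# score) paired with a made-up next-day return. All values are invented.
data = [
    ((1.0, 0.2, 0.5), 0.9),
    ((0.4, 0.8, 0.1), 0.3),
    ((0.9, 0.1, 0.9), 1.1),
    ((0.2, 0.9, 0.2), 0.1),
]

weights = [0.0, 0.0, 0.0]
lr = 0.1
for _ in range(2000):
    # Plain stochastic gradient descent on squared prediction error.
    for features, target in data:
        pred = sum(w * x for w, x in zip(weights, features))
        err = pred - target
        weights = [w - lr * err * x for w, x in zip(weights, features)]

# The learned weights suggest which data streams matter: here shipping
# and sentiment end up with larger weights than the weather index.
print([round(w, 2) for w in weights])
```

Nobody listed a rule like "high shipping volume means higher returns" — the fit discovered that relationship from the data, which is the core idea behind the data sets Ellis gets pitched.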

AI systems are coming into vogue now because the technology used to crunch these huge data sets has finally caught up with traders' ambitions. Companies like Nvidia and AMD are developing new computer chips that are fine-tuned to run AI systems, and Nvidia's CUDA software platform is helping researchers run their programs even faster.

AI doesn't mean the end of human traders though. Some over-exuberant trading programs are suspected to have caused a stock market flash crash in 2010, according to the Bloomberg story. Ellis and his team have successfully used artificial intelligence to improve returns in their firm, but it's not run entirely by the robots yet. 

Regardless, AI is taking over the world of finance. There will be winners and losers, but it's probably here to stay.

Click here to read the full Bloomberg story


JPMORGAN: Robots could cause the 'next major correction'


  • JPMorgan says technological innovation like big data and artificial intelligence could cause the next big market correction.
  • The firm notes that financial conditions are overheating, which has already created a situation with little room for error.


Apparently too much technological advancement can be a bad thing ... at least in the early stages of a revolution.

That's according to JPMorgan, which says market cycles can sometimes become victims of their own progress.

New ideas and techniques can create "huge volatility," which in turn pulls in new participants who aren't prepared or knowledgeable enough to know what they're doing, JPMorgan says. The firm warns of the dire short-term consequences that can result and highlights two key areas of innovation that could lead to trouble.

"This then leads to excesses and corrections before better management and expertise leads innovations to become incorporated in daily economic and financial life," Jan Loeys, the head of asset allocation and alternative investments at JPMorgan, wrote in a client note. "Today innovation is all about big data and AI (artificial intelligence), which will eventually greatly transform society, but could easily become the core of the next major correction."

But big data and AI alone won't be enough to end what's been a historically strong market, with stocks now in their ninth year of a bull market. Other stresses must be roiling the market, and JPMorgan sees that coming from what it describes as "financial overheating."

An example of this can be seen in a trade that involves shorting equity volatility, or betting against moves in the stock market. It is one of the most crowded trades around, and many market experts have urged caution over what they see as artificially suppressed price swings that could unwind any day.

Of most concern for JPMorgan right now are lofty stock valuations, investment-grade bond spreads, and junk-bond yields. And the firm notes that once investors get one taste of success, they'll continue chasing returns, even as bubble conditions start to form.

"The speed of these upgrades and asset price rallies is both exhilarating and scary," Loeys wrote. "The faster we rally, the greater the joy, but the more one should be worried about the eventual reckoning."

So what does the firm think you should do? Start trimming positions or putting on hedges, even if the market looks invincible. If they're right about a coming correction, you'll be glad you did.

