
San Francisco streets are so dirty, the city is spending nearly $300,000 on trash bins that send a warning when they’re overflowing


  • San Francisco is installing sensors on 1,000 trash bins in an effort to curb littering and illegal dumping.
  • The city's downtown neighborhoods are filled with needles and garbage, creating conditions that many have compared to the world's poorest slums. 
  • The sensors help optimize waste collection by steering city officials in the direction of trash cans that are overflowing.

San Francisco's streets are notoriously dirty.

In neighborhoods like the Tenderloin, Chinatown, and Mission District, it's common to find garbage and human feces scattered along the sidewalk. On the filthiest street in San Francisco, passersby can encounter an outdoor drug market with piles of poop and used heroin needles. 

But not every street contains the same amount of trash — a fact that could be critical to achieving clean sidewalks in the future. 

Read more: San Francisco's downtown area is more contaminated with drug needles, garbage, and feces than some of the world's poorest slums

This spring, San Francisco will equip 1,000 trash cans in major commercial corridors with small black sensors that detect whether the can is full and warn city officials before the bin overflows. The technology will prevent waste collectors from having to stop at every trash can, only to discover that some are empty. 
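The routing logic the sensors enable can be sketched in a few lines. This is an illustration of the idea, not Nordsense's actual software; the 80% threshold, function names, and bin IDs are invented for the example.

```python
# Illustrative sketch: flag only the bins whose fill-level reading has
# crossed a threshold, so crews skip bins that are still mostly empty.

FULL_THRESHOLD = 0.8  # assumed alert level: 80% full

def bins_needing_pickup(readings):
    """readings: dict mapping bin_id -> fill level in [0.0, 1.0]."""
    return sorted(
        bin_id for bin_id, level in readings.items()
        if level >= FULL_THRESHOLD
    )

readings = {"market-st-012": 0.95, "mission-st-044": 0.30, "polk-st-007": 0.82}
print(bins_needing_pickup(readings))  # ['market-st-012', 'polk-st-007']
```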


The idea was first tested last year via a three-month pilot program, which found that the sensors reduced overflowing cans by 80%, reduced street cleanings by 66%, and all but eliminated public complaints about swelling trash bins. 

The company developing the sensors, Nordsense, runs a similar program in Copenhagen, where it discovered that the vast majority of trash bins attended to by the city weren't actually full.

"You can see the same pattern for almost every city in the world," said Anders Engdal, the company's CEO. 

Nordsense's three-year contract with the city of San Francisco sets aside $294,000 for installation, parts, and the annual cost of maintaining and monitoring the trash bins. 

The difference between San Francisco and Copenhagen, Engdal said, is that San Francisco residents are prone to illegal dumping, such as leaving an old appliance outside or tossing a couch on the side of the road. Between October 2017 and September 2018, the San Francisco Public Works Department and the city's waste collection service, Recology, responded to more than 82,000 complaints of illegal dumping. 

Engdal said overflowing trash bins are a sure-fire sign that illegal dumping will soon follow. They're also a signal to the rest of the neighborhood that it's okay to litter. (The idea behind this phenomenon, known as the "broken windows" theory, essentially says that civil disorder encourages more civil disorder down the line.)


No matter their size, cities can be better at using their existing infrastructure "as opposed to throwing more money at it," said Nordsense founder Manuel Maestrini.

If waste collectors aren't busy stopping at clean neighborhoods, he said, they'll have more time to manage ones that are overrun with garbage. Fewer trucks on the road could also reduce vehicle emissions and traffic congestion. 

In the future, Maestrini said, the sensors could tackle other problems, like monitoring storm drains that have clogged and could cause flooding. Given San Francisco's vulnerability to sea level rise, the city might benefit from that as well. 

SEE ALSO: San Francisco has a 'Poop Patrol' to deal with its feces problem, and workers make more than $184,000 a year in salary and benefits



Alexandria Ocasio-Cortez isn't afraid of the rise of robots, but she agrees with Bill Gates that they should be taxed for taking jobs


  • Democratic Rep. Alexandria Ocasio-Cortez has an unusual take on the risk of robots taking over US jobs.
  • The conventional wisdom is that business owners who replace human workers with robots will accumulate wealth — something a left-leaning politician would usually be opposed to.
  • Speaking at SXSW, Ocasio-Cortez said people should be "excited" for automation because it would leave people free to be more creative.
  • She also thinks robots, or more precisely robot owners, should be taxed for replacing jobs.

Speaking at SXSW over the weekend, Democratic Rep. Alexandria Ocasio-Cortez voiced her view that people shouldn't be worried that automation might be about to wipe out their jobs.

It's an unusual take, because the received wisdom is that rising automation will adversely impact both blue- and white-collar jobs, laying waste to factory workers, lawyers, and anyone else who does repetitive work that could easily be automated.

The thinking goes that these people will either need swift retraining or will need to survive on government handouts. Meanwhile, the moguls who actually own the robots will accumulate wealth by laying off expensive human workers in favour of cheap robot workers. Think Jeff Bezos — already the world's richest man — running an incredibly profitable Amazon, which is already being heavily automated by robots.

Read more: Alexandria Ocasio-Cortez leaned into her socialist image at SXSW, saying 'capitalism is irredeemable'

A left-leaning politician like Ocasio-Cortez should, conventionally, be sceptical about the rise of AI. Indeed, one would-be Democratic presidential candidate, businessman Andrew Yang, has issued doomsday predictions that AI will "destabilise society."


But Ocasio-Cortez, leaning into her socialist credentials, wants to retool our entire way of thinking about work.

"We should not be haunted by the specter of being automated out of work. We should not feel nervous about the tollbooth collector not having to collect tolls. We should be excited by that. But the reason we’re not excited about it is because we live in a society where if you don’t have a job, you are left to die," she said.

"We should be excited about automation, because what it could potentially mean is more time educating ourselves, more time creating art, more time investing in and investigating the sciences, more time focused on invention, more time going to space, more time enjoying the world that we live in. Because not all creativity needs to be bonded by wage."

Wealthy robot owners, she added, could be taxed by as much as 90%. Ocasio-Cortez cited comments by Microsoft founder Bill Gates, who suggested that robots should pay the taxes for the people they replace, although he didn't name a precise figure. "What [Gates is] really talking about is taxing corporations at 90%, but it’s easier to say: 'Tax a robot,'" Ocasio-Cortez said.

It would require a total reimagining of work and leisure time. But Ocasio-Cortez isn't the only person to argue that automated labour would free humans up to do whatever they like — normally something worthwhile or creative.

Toby Walsh, a professor of AI at the University of New South Wales, has argued it's impossible to know how many jobs might be adversely impacted by AI. He disputed one 2013 study from Oxford that said 47% of all US employment was at risk.

He wrote in The Guardian: "The AI revolution then will be about rediscovering the things that make us human. Technically, machines will have become amazing artists. They will be able to write music to rival Bach, and paintings to match Picasso. But we’ll still prefer works produced by human artists.

"These works will speak to the human experience. We will appreciate a human artist who speaks about love because we have this in common. No machine will truly experience love like we do."

You can watch Alexandria Ocasio-Cortez's SXSW session here:


Facebook is using AI to crack down on 'revenge porn' (FB)


  • Facebook is using AI to try to crack down on "revenge porn."
  • The social network plans to use machine learning tools to find and automatically flag nonconsensually shared intimate photos, it announced on Friday.
  • Facebook has faced a steady drumbeat of criticism over the years for its approach to content moderation.

Facebook is rolling out technology to make it easier to find and remove intimate pictures and videos posted without the subject's consent, often called "revenge porn."

Currently, Facebook users or victims of revenge porn have to report the inappropriate pictures before content moderators will review them. The company has also suggested that users send their own intimate images to Facebook so that the service can identify any unauthorized uploads. Many users, however, balked at the notion of sharing revealing photos or videos with the social-media giant, particularly given its history of privacy failures.

The company's new machine learning tool is designed to find and flag the pictures automatically, then send them to humans to review.

Facebook and other social media sites have struggled to monitor and contain the inappropriate posts that users upload, from violent threats to conspiracy theories to inappropriate photos.

Facebook has faced harsh criticism for allowing offensive posts to stay up too long, for not removing posts that don't meet its standards and sometimes for removing images with artistic or historical value. Facebook has said it's been working on expanding its moderation efforts, and the company hopes its new technology will help catch some inappropriate posts.

The technology, which will be used across Facebook and Instagram, was trained using pictures that Facebook has previously confirmed were revenge porn. It is trained to recognize a "nearly nude" photo — a lingerie shot, perhaps — coupled with derogatory or shaming text that would suggest someone uploaded the photo to embarrass or seek revenge on someone else.

At least 42 states have passed laws against revenge porn. Many such laws came up in the past several years as posting of non-consensual images and videos has proliferated. New York's law, which passed in February, allows victims to file lawsuits against perpetrators and makes the crime a misdemeanor.

Facebook has been working to combat the spread of revenge porn on its site for years, but has largely relied on people proactively reporting the content up until now. But that means by the time it's reported, someone else has already seen it, chief operating officer Sheryl Sandberg said in an interview with The Associated Press. And it's often tough and embarrassing for a victim to report a photo of themselves.

"This is about using technology to get ahead of the problem," Sandberg said.

Facebook still sees user-contributed photos as one way to address the problem, and says it plans to expand that program to more countries. It allows people to send in photos they fear might be circulated through encrypted links. Facebook then creates a digital code of the image so it can tell if a copy is ever uploaded and deletes the original photo from its servers.
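The matching step described above can be sketched as hash comparison. This is a minimal illustration assuming exact-match hashing; the function names are invented, and Facebook's production system would need a perceptual hash that survives resizing and re-encoding, which a cryptographic digest like the one below does not.

```python
# Toy sketch of hash-based re-upload detection: store only a digest of
# the submitted image, then compare future uploads against it.
import hashlib

blocked_hashes = set()

def register_image(image_bytes):
    # Store only the digest; the original photo can then be deleted,
    # as the article says Facebook does.
    blocked_hashes.add(hashlib.sha256(image_bytes).hexdigest())

def is_blocked(image_bytes):
    return hashlib.sha256(image_bytes).hexdigest() in blocked_hashes

register_image(b"victim-submitted photo bytes")
print(is_blocked(b"victim-submitted photo bytes"))  # True
print(is_blocked(b"some other photo"))              # False
```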

The company does not expect the new technology to catch every instance of revenge porn, and said it will still rely on users reporting photos and videos.


AI experts are studying the way that kids' brains develop, and it could be a game-changer for the technology (GOOG, GOOGL)


American professor of psychology Alison Gopnik attends a photocall at Edinburgh International Book Festival at Charlotte Square Gardens on August 26, 2016 in Edinburgh, Scotland.

  • Computer scientists developing artificial intelligence want their technology to be more like a child's brain.
  • Children's brains are great at collecting information and learning from cues in the world around them, something that AI systems struggle with.
  • A cognitive development expert, along with AI specialists from Alphabet's DeepMind and Stanford, discussed how a better understanding of the child's brain could provide the blueprint for the next generation of AI. 

STANFORD, California — At first glance, it might not be clear why an expert in children's cognitive development was a featured speaker at a conference on artificial intelligence.

It turns out that kids — and those who understand how they learn — may have a lot to teach the experts in artificial intelligence about how to improve their systems.

AI systems have gotten good at swallowing gobs of data and using it to make predictions based on all that information, said Alison Gopnik on Monday during a panel discussion at an event here sponsored by the Stanford Institute for Human-Centered Artificial Intelligence. But they're not very good at generalizing from small amounts of data, said Gopnik, a professor who studies cognitive development at the University of California, Berkeley. Nor are they good at collecting data on their own to make those generalizations or at learning about the world from the cues given by other intelligent entities around them, she added.

But babies and young children excel at all those things, she said.

"So those three things — model building, exploration, social learning — are some clues to how children can learn so much, and those are things that are just at the beginning in terms of what AI can do," said Gopnik, the author of "The Scientist in the Crib: What Early Learning Tells Us About the Mind."

Read this: AI could soon be all around us — here's how that could upend 8 different industries

AI researchers are already using what psychology experts such as Gopnik have discovered about the way children learn and applying them to their field. Gopnik herself is working with researchers at her university to develop AI systems that are meant to be curious, like kids. They're designed to go out and collect data on their own, she said. 

One of the key insights that's helped AI research advance so rapidly in the last five to 10 years has been the realization that they needed to design an actual curriculum, a teaching program for their systems, said Demis Hassabis, cofounder of Alphabet-owned AI lab DeepMind, who sat on the same panel as Gopnik. They couldn't expect their systems to master tasks immediately, but had to allow the systems to build up to those tasks incrementally by mastering steps along the way, Hassabis said.

"You can't just go from zero to one," he said. "You actually need to do easier versions of the task and build up in the way that we teach children."
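The incremental approach Hassabis describes is essentially what AI researchers call curriculum learning. A schematic sketch of the idea follows; the ToyModel class, its fixed per-round skill gain, and the mastery threshold are all invented for illustration.

```python
# Schematic curriculum-learning loop: train on the easiest version of a
# task first, and advance to the next stage only once the current one
# is mastered.

class ToyModel:
    def __init__(self):
        self.skill = {}

    def evaluate(self, stage):
        # Skill on a never-trained stage starts at zero.
        return self.skill.get(stage, 0.0)

    def train_one_round(self, stage):
        # Each round of training raises skill on that stage a little.
        self.skill[stage] = self.skill.get(stage, 0.0) + 0.25

def train_with_curriculum(model, stages, mastery=0.9):
    """stages must be ordered easiest -> hardest."""
    for stage in stages:
        while model.evaluate(stage) < mastery:
            model.train_one_round(stage)

m = ToyModel()
train_with_curriculum(m, ["easy", "medium", "hard"])
print(all(m.evaluate(s) >= 0.9 for s in ["easy", "medium", "hard"]))  # True
```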

One of the promises of patterning AI after the way children's brains develop is that the technology could be much more efficient. Instead of relying on huge data sets and lots of computing power to make sense of the world, such child-like AI systems could potentially rely on much less data and power, said Chris Manning, a professor of linguistics and computer science at Stanford's Artificial Intelligence Laboratory.

"You can get this orders of magnitude more efficient learning," Manning said.

Got a tip? Contact this reporter via email at twolverton@businessinsider.com, message him on Twitter @troywolv, or send him a secure message through Signal at 415.515.5594. You can also contact Business Insider securely via SecureDrop.

SEE ALSO: AI is great at recognizing nipples, Mark Zuckerberg says


Here's the pitch deck AI startup Skymind just used to scoop up $11.5 million in funding


Chris Nicholson, founding CEO of artificial-intelligence startup Skymind, from March 2019.

  • Skymind just closed an $11.5 million Series A funding round.
  • The San Francisco-based company offers a set of software that helps companies build artificial intelligence systems.
  • It plans to use the new funds to build out its sales team.
  • The pitch deck it used to raise the new money is below.

Artificial intelligence is probably the most important new development in the tech industry, but many companies are struggling to embrace it.

Chris Nicholson thinks his startup can help.

One of the things that's holding companies back from AI is that the programs that house their data and the software tools for analyzing the data are written in two different coding languages that don't easily talk to each other. That's where Skymind, the San Francisco startup where Nicholson is CEO, comes in.

Dubbed Skymind Intelligence Layer, or SKIL, the product allows companies' nascent AI systems to build models that tie the disparate pieces together and can be easily incorporated into their own products. SKIL is built around a set of open-source applications that Skymind has packaged for enterprises. The startup offers it in classic fashion, charging corporate clients seeking support for the software.

"You can think of us as a Red Hat for AI," Nicholson said, referring to the company that built a business around distributing the Linux operating system to corporations.

Founded in 2014, Skymind already has some notable customers, including SoftBank, ServiceNow, and French wireless company Orange. It could soon have a lot more.

On Wednesday, the company announced that it has closed an $11.5 million Series A funding round led by TransLink Capital, bringing the total amount it's raised to $17.9 million. In addition to using the new funds to beef up its engineering staff, the company plans to use the money to build out a sales team to help market its software.

"We didn't have a single salesperson until this round," Nicholson said.

Here's the pitch deck Nicholson and his team used to solicit its Series A funds:

SEE ALSO: AI experts are studying the way that kids' brains develop, and it could be a game-changer for the technology








THE AI IN TRANSPORTATION REPORT: How automakers can use artificial intelligence to cut costs, open new revenue streams, and adapt to the digital age (TM, LYFT)


This is a preview of a research report from Business Insider Intelligence. To learn more about Business Insider Intelligence, click here. Current subscribers can log in and read the report here.

New technology is disrupting legacy automakers' business models and dampening consumer demand for purchasing vehicles. Tech-mediated models of transportation — like ride-hailing, for instance — are presenting would-be car owners with alternatives to purchasing vehicles.

In fact, a study by ride-hailing giant Lyft found that in 2017, almost 250,000 of its passengers sold their own vehicle or abandoned the idea of replacing their current car due to the availability of ride-hailing services.

Artificial intelligence (AI) is one technology automakers can turn to in order to adapt to this changing landscape. AI will create significant opportunities for automakers to both reduce production costs and introduce new revenue streams, including self-driving technology, predictive maintenance, and route optimization.

This will enable automakers to take advantage of what will amount to billions of dollars in added value. For example, self-driving technology will present a $556 billion market by 2026, growing at a 39% CAGR from $54 billion in 2019, per Allied Market Research.
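The two figures quoted are consistent with each other, as a quick back-of-the-envelope check shows (assuming the seven-year span from 2019 to 2026):

```python
# Sanity check: growing $54B (2019) to $556B (2026) implies roughly the
# 39% compound annual growth rate the report cites.
start, end, years = 54, 556, 2026 - 2019
cagr = (end / start) ** (1 / years) - 1
print(round(cagr * 100, 1))  # 39.5
```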

But firms face some major hurdles when integrating AI into their operations. Many companies are not presently equipped to begin producing AI-based solutions, which often require a specialized workforce, new infrastructure, and updated security protocol. As such, it's unsurprising that the main barriers to AI adoption are high costs, lack of talent, and lack of trust. Automakers must overcome these barriers to succeed with AI-based projects.

In The AI In Transportation Report, Business Insider Intelligence will discuss the forces driving transportation firms to AI, the market value of the technology across segments of the industry, and the potential barriers to its adoption. We will also show how some of the leading companies in the space have successfully overcome those barriers and are using AI to adapt to the digital age.    

Here are some key takeaways from the report:

  • Automakers can use AI to adapt to a changing transportation landscape, as it offers opportunities to both decrease production costs and create new revenue streams.
  • Major auto companies are already adopting the technology in order to capitalize on the benefits it is expected to provide — Toyota launched a venture capital subsidiary in 2017, and Volkswagen has well over 100 AI applications running in trial projects across its 120 plants.
  • By 2025, AI is expected to provide $173 billion in cost savings across the entire automotive OEM supply chain, ranging from procurement to research and development, according to McKinsey. 
  • Self-driving technology will be the biggest opportunity AI creates in the transportation space: It will present a $556 billion opportunity by 2026, growing at a 39% CAGR from $54 billion in 2019, per Allied Market Research. 
  • However, costs will still be a major barrier to adoption — more than half (53%) of global business and IT leaders cited the high costs associated with AI technology as a major deterrent to adoption, according to a survey conducted by MIT Technology Review.

In full, the report:

  • Identifies the forces that are turning automakers toward AI as they attempt to move into the digital age. 
  • Pinpoints the segments of the OEM value chain where AI will have the biggest impact. 
  • Explores the biggest challenges firms face in implementing their AI adoption strategy. 
  • Highlights key players in the transportation space that are positively leveraging AI. 
  • Discusses how firms can overcome the major barriers to adoption in order to fully capitalize on the use of AI in their operations. 

Interested in getting the full report? Here are two ways to access it:

  1. Purchase & download the full report from our research store. >> Purchase & Download Now
  2. Subscribe to a Premium pass to Business Insider Intelligence and gain immediate access to this report and more than 250 other expertly researched reports. As an added bonus, you'll also gain access to all future reports and daily newsletters to ensure you stay ahead of the curve and benefit personally and professionally. >> Learn More Now

The choice is yours. But however you decide to acquire this report, you've given yourself a powerful advantage in your understanding of AI in transportation.


We tried the $300 smart security camera that was cool enough to get on Apple's radar. Here’s what it was like to use. (AAPL)


  • The Lighthouse security camera was a $300 internet-connected device that could identify you and your family members.
  • But Lighthouse shut down its operations last December and sold its patents to Apple. Now, Lighthouse's founders and 20 of its staff are joining Apple, according to a report from The Information.
  • We tried the Lighthouse camera before the company shut down, and it provided a glimpse into the type of technology know-how the company will bring to Apple. 

Hardware startup Lighthouse had a straightforward goal with its first-ever product: make a home security camera that's smarter than anything else on the market.  

The result was the $300 Lighthouse camera, an AI-powered, internet connected smart security camera. It could identify you and your family members, alert you when there are intruders in your home, and understand commands like, "Did the dog walker come today?"

But Lighthouse never caught on with customers, and the company shut down operations last December. Since then, it has stopped selling its camera and sold some of its patents to Apple. Now, the company's founders and 20 of its staff are joining Apple, according to a report from The Information.

We tried the Lighthouse camera last year, well before the company shut down. While the camera is no longer available to buy, it's a solid indication of the type of technology the company's founders have brought to Apple. 

Here's how it worked:

SEE ALSO: A complete guide to the Amazon Echo family, the smart speakers that will change your home forever

Lighthouse was founded in 2014 by Hendrik Dahlkamp and Alex Teichman, who met while working in Udacity founder Sebastian Thrun's lab at Stanford University. Lighthouse later joined Playground Global, an incubator run by Android creator Andy Rubin.



The Lighthouse camera was the startup's first and only product. When building it, Lighthouse wanted to "take a traditional camera and give it the eyes of a self-driving car, and give it the natural language understanding of a Google Assistant," Teichman told Business Insider.



Teichman described traditional security cameras versus the Lighthouse camera as "going from VCR to TiVo."




Over a thousand Google employees have signed a petition calling for the removal of a member of Google's new AI ethics board over her comments on immigrants and trans people (GOOGL, GOOG)


  • Over a thousand Google employees have signed a petition calling for the removal of Kay Coles James, the president of the conservative Heritage Foundation think tank, from Google's recently announced artificial-intelligence ethics board.
  • James has previously made anti-trans comments, including calling transgender women "biological males."
  • Googlers Against Transphobia, the group behind the petition, wrote that for Google to appoint James to the ethics board "elevates and endorses her views, implying that hers is a valid perspective worthy of inclusion in its decision making."
  • Already, Alessandro Acquisti, a leading behavioral economist and privacy researcher, has stepped down from the board, saying he did not think it was "the right forum for me to engage in this important work."

Googlers are speaking out against a conservative appointee to its recently announced artificial-intelligence ethics board over her previous statements on transgender rights.

As of Monday, over a thousand Google employees had signed a petition calling for the removal of Kay Coles James, the president of the right-wing Heritage Foundation think tank, who has previously made anti-trans comments, including calling transgender women "biological males."

The group, known as Googlers Against Transphobia, wrote in its blog post announcing the petition that for Google to appoint James to the ethics board "elevates and endorses her views, implying that hers is a valid perspective worthy of inclusion in its decision making."

"This is unacceptable," the post said.

The group also described James as "anti-LGBTQ" and "anti-immigrant" for her previous public comments.

Google did not immediately respond to Business Insider's request for comment on the petition.

According to Googlers Against Transphobia, the person responsible for appointing James to the ethics board said her inclusion was meant to help foster "diversity of thought."

The group said that her addition "significantly undermines Google's position on AI ethics and fairness" and that because "the potential harms of AI are not evenly distributed," people "who are most marginalized are most at risk."

Google last Tuesday announced the AI ethics board, called the Advanced Technology External Advisory Council, as a way for the company to address difficult ethical decisions it faces with AI.

"This group will consider some of Google's most complex challenges that arise under our AI Principles, like facial recognition and fairness in machine learning, providing diverse perspectives to inform our work," Google's senior vice president of global affairs, Kent Walker, wrote in a blog post announcing the board.

Controversy followed almost immediately after the Heritage Foundation president was named to the eight-person council, with some speculating that her addition was Google's attempt to appease conservative lawmakers who have accused the tech giant of anti-conservative bias.

Read more: Google set up an external ethics council for AI, but one of its members was called out for her views on trans people and immigration

Already, Alessandro Acquisti, a leading behavioral economist and privacy researcher, has stepped down from Google's ethics board, saying he did not think it was "the right forum for me to engage in this important work."

Numerous academics, among others, have also signed the petition calling for James' removal.

Got a tip? Contact this reporter via Signal or WhatsApp at +1 (209) 730-3387 using a non-work phone, email at nbastone@businessinsider.com, Telegram at nickbastone, or Twitter DM at @nickbastone.

SEE ALSO: Sundar's Silence: The thorny factors behind the Google CEO's Trump meeting and his deafening quiet about it



After an employee backlash, Google has cancelled its AI ethics board a little more than a week after announcing it (GOOG, GOOGL)


  • Google has shut down its new AI ethics board.
  • It was only announced last week.
  • It ran into immediate controversy over its choice of members, with thousands of employees signing a petition against a member who made anti-trans comments.

Google has shut down its AI ethics board little more than a week after announcing it.

Vox reports that the Silicon Valley tech giant has scrapped the group, which had been intended to scrutinise the company's work on artificial intelligence to ensure the tech is ethically developed. A Google spokesperson confirmed the closure to Business Insider.

It was mired in controversy from the start, with thousands of Google employees up in arms about the inclusion of Kay Coles James, president of right-wing think tank Heritage Foundation.

Last Tuesday, Google announced the AI ethics board, called the Advanced Technology External Advisory Council, as a way for the company to address difficult ethical decisions it faces with AI.

"This group will consider some of Google's most complex challenges that arise under our AI Principles, like facial recognition and fairness in machine learning, providing diverse perspectives to inform our work," Google's senior vice president of global affairs, Kent Walker, wrote in a blog post announcing it.

Controversy followed almost immediately after the Heritage Foundation president was named to the eight-person council, with some speculating that her addition was Google's attempt to appease conservative lawmakers who have accused the tech giant of anti-conservative bias. She has previously made anti-trans comments, including calling transgender women "biological males."

A group of Google employees, known as Googlers Against Transphobia, started a petition demanding her ouster, writing that for Google to appoint James to the ethics board "elevates and endorses her views, implying that hers is a valid perspective worthy of inclusion in its decision making."

The group said that her addition "significantly undermines Google's position on AI ethics and fairness" and that because "the potential harms of AI are not evenly distributed," people "who are most marginalized are most at risk." Nearly 2,400 Google employees subsequently signed the petition.

Alessandro Acquisti, a leading behavioral economist and privacy researcher, subsequently stepped down from Google's ethics board, saying he did not think it was "the right forum for me to engage in this important work." Numerous academics, among others, also signed the petition calling for James' removal.

In a statement, a Google spokesperson said: "It’s become clear that in the current environment, ATEAC can’t function as we wanted. So we’re ending the council and going back to the drawing board. We’ll continue to be responsible in our work on the important issues that AI raises, and will find different ways of getting outside opinions on these topics."

Join the conversation about this story »

NOW WATCH: Aston Martin's new fully-electric Lagonda could be the future of SUVs

This robot can find Waldo in any 'Where's Waldo' puzzle in as little as 5 seconds — here's how it works


  • One company built a robot that can find Waldo in any "Where's Waldo" puzzle faster than most humans probably could.
  • "There's Waldo" is a robot that uses computer vision and machine learning to spot Waldo.
  • The robot can find Waldo in as little as 4.5 seconds.
  • Here's how "There's Waldo" works:

Creative agency redpepper built a camera-mounted robotic arm and connected it to Google's machine learning service AutoML, which analyzes the faces on any given page to find Waldo.



If "There's Waldo" can find a face in the puzzle with 95% confidence or higher, it will move its mechanical arm to point at any and all Waldos on the page with its creepy rubber hand.



The robot arm, which is controlled by a tiny Raspberry Pi computer, was coded with Python to extend and take a photo of the "Where's Waldo" puzzle.
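The detect-filter-point pipeline described above can be sketched in a few lines of Python. This is a hypothetical illustration — the helper names, coordinates, and threshold handling are assumptions, not redpepper's actual (unpublished) code:

```python
# Hypothetical sketch of the "There's Waldo" pipeline described above.
# Assumes a face-detection step that yields (x, y, confidence) tuples;
# the real project used Google's AutoML and a Raspberry Pi-driven arm.

CONFIDENCE_THRESHOLD = 0.95  # only point at faces the model is >= 95% sure about

def find_waldos(detections):
    """Return the coordinates of every detection above the threshold."""
    return [(x, y) for (x, y, conf) in detections if conf >= CONFIDENCE_THRESHOLD]

# Example: three candidate faces found on a puzzle page
detections = [(120, 340, 0.97), (405, 88, 0.62), (233, 150, 0.99)]
targets = find_waldos(detections)

for x, y in targets:
    # On the real robot this step would drive the servo motors
    # so the rubber hand points at each match.
    print(f"Pointing at Waldo candidate at ({x}, {y})")
```

Note that the arm points at "any and all" matches, which is why the sketch keeps every detection above the threshold rather than just the best one.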



See the rest of the story at Business Insider

Google Cloud is taking on Amazon by moving into retail, and it's a first step in new CEO Thomas Kurian's master plan (GOOGL, AMZN)


Part of new CEO Thomas Kurian's master plan for Google Cloud to improve its enterprise chops is to target specific industries, and now he's taken a first step in that direction.

On Wednesday, Google announced Google Cloud for Retail, a platform with tools meant to help retailers with predicting sales, recommending products with the help of artificial intelligence, and more. It's partnering with customers like Bed Bath & Beyond, Kohl's, Shopify, and Target — retailers that also compete with Amazon.

At a press briefing, Google Cloud engineers said it's the first time Google Cloud was launching an AI product to address a business process for a specific vertical.

Kurian also emphasized that Google Cloud was building capabilities to help companies in specific industries, such as healthcare, media, financial services, retail, and manufacturing.

"Our work in these industries would not be complete if we didn't build our enterprise capability," Kurian said onstage Tuesday at Google Cloud's annual conference. "We at Google Cloud want to be the best partner. We believe we can do that in two important ways. The first way is bringing expertise to help you on that journey. The second is to be the easiest cloud provider to do business with."

New features

Google Cloud for Retail will include hosting capabilities, which can help during peak traffic times like Black Friday, as well as increased support for peak times. This is important because if a website crashes during Black Friday, it can hurt a company's revenue and brand.

In addition, it will include real-time inventory management and analytics capabilities to give retailers data on which products are in stock.

There are also search capabilities, including a mobile-phone feature allowing customers to take photos or screenshots of products they like and use them to search for similar items sold by a retailer. Finally, it will include product recommendations intended to help retailers deliver personalized recommendations to customers, based on their online behavior.
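Visual search features of this kind typically work by comparing image embeddings — numeric vectors that place similar-looking products close together. The following is a generic sketch of that idea, not Google's actual API; the embedding values and catalog structure are illustrative assumptions:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def most_similar(query_embedding, catalog):
    """Rank catalog items by similarity to the shopper's photo embedding."""
    return sorted(
        catalog,
        key=lambda item: cosine_similarity(query_embedding, item["embedding"]),
        reverse=True,
    )

# Toy catalog with made-up 3-dimensional embeddings (real systems use hundreds of dimensions)
catalog = [
    {"name": "blue sneaker", "embedding": [0.9, 0.1, 0.0]},
    {"name": "red boot",     "embedding": [0.1, 0.9, 0.2]},
]
query = [0.85, 0.15, 0.05]  # embedding of the shopper's screenshot
ranked = most_similar(query, catalog)
```

Here `ranked[0]` would be the catalog item that looks most like the photo, which is the behavior the feature describes.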

Read more: Under new CEO Thomas Kurian, Google Cloud is recruiting some of Amazon Web Services' fiercest critics into an expanded open source partnership

"As you know, retail is in the midst of major transformation," Ratnakar Lavu, a senior executive vice president and chief technology officer of Kohl's, said onstage on Tuesday. "As a retailer, we needed to innovate. We also needed to become more engineering-focused. And guess what, Google has that same DNA."

SEE ALSO: New Google Cloud CEO Thomas Kurian says he's borrowing from the Oracle playbook to help catch up to Amazon and Microsoft


Robots could wipe out 1.3 million Wall Street jobs in the next 10 years


  • Jobs in banking and the financial services industries continue to be the most popular in 2019.
  • Despite their popularity, a new report predicts that 1.3 million bank workers will lose their jobs or be reassigned due to automation.
  • Banks have already begun investing in artificial intelligence, and recognize the technology will displace workers.
  • Visit BusinessInsider.com for more stories.

Jobs in banking are some of the most sought after for job seekers — but plenty of roles may not be around much longer. 

Despite a year of scandals that entangled many of the country's largest banks, the desire to work at these companies remains high, according to a new report by LinkedIn. Some of the more high-profile scandals include Deutsche Bank's alleged involvement in a global money-laundering scheme and accusations against Wells Fargo's auto-loan and mortgage practices.

Nonetheless, Bank of America, Goldman Sachs, Citigroup, Wells Fargo, and JPMorgan Chase remain five of the most popular places to work in 2019. LinkedIn attributes the popularity to banks offering increasingly tech-focused jobs that attract talented software engineers and developers out of college.

Read more: The 30 hottest companies of the year, according to LinkedIn

"The reality is that if somebody wants to learn finance and strategy, these banks are still the places to be trained and developed," Heather Hammond, co-head of the global banking and markets practice at Russell Reynolds Associates, told LinkedIn.

While job seekers may be flocking to banks for now, a new report predicts that more than a million jobs in the industry could disappear in just over 10 years. Job losses or reassignments will affect 1.3 million bank workers in the US alone by 2030, according to the British insights firm IHS Markit. Especially at-risk roles include customer-service reps, financial managers, and compliance and loan officers.

Though the most at-risk jobs seem to be lower-paying, jobs in banking as a whole are some of the highest-paying in the country. Starting analysts make $91,000 in base pay, while managing directors can earn almost $1 million after bonuses. The industry could also add a whopping $512 billion in global revenue by 2020 through intelligent automation, according to a 2018 report from Capgemini.

While the use of AI remains sparse and the technology still basic, that potential revenue boost is likely to accelerate the adoption of automation, Business Insider analyst Lea Nonninger reports.

Unfortunately for job seekers, banks' investment into automation is well under way. In fact, a detailed 2018 report from Business Insider Intelligence noted that banks are already using AI to mimic bank employees, automate processes, and preempt problems. JPMorgan is cleaning thousands of databases to make room for machine learning tech. Citi president Jamie Forese said in 2018 that robots could replace as many as 10,000 human jobs within five years.

Laura Barrowman, chief technology officer at the Swiss investment bank Credit Suisse, revealed the company is already retraining employees whose jobs have been displaced by AI: "Globally, if you look at cyber skills, I think there is a deficit," Barrowman told Business Insider's panel at the World Economic Forum earlier this year. "There is such a shortage of skills, and you need people who have that capability."

SEE ALSO: AI will have a 'transformative' effect on Wall Street, according to a new report, putting 1.3 million finance jobs in the US at risk


This startup raised $28 million to give dangerous industrial robots eyes, and is predicted to be worth tens of billions


  • Veo Robotics has raised a total of $28 million for its technology that gives dangerous industrial robots eyes and allows them to operate alongside humans.
  • A prototype of the product will be rolled out in May to select customers, and Veo's investors, which include Google Ventures and Lux Capital Management, said the firm could one day be worth tens of billions of dollars.
  • Veo CEO Patrick Sobalvarro told Business Insider why the tech is like working with a trained animal and is a no-brainer for factories that produce fast-moving consumer goods.
  • Visit BusinessInsider.com for more stories.

If you've ever seen Amazon's robots whizzing across its warehouse floors, you'll know it's a sort of hypnotic dance, which ends up with someone's order being delivered that much quicker. The other thing about this spectacle is it all takes place behind a cage that separates man and machine.

The metal barrier is, of course, there for safety reasons — and like the Amazon fulfillment centers, it is the norm in factories with production lines building everything from cars and cell phones to refrigerators. Veo Robotics, which specializes in making industrial robots smarter, reckons about 3 million machines worldwide are kept well away from human contact to avoid catastrophe.

And that's where Veo's technology comes in. The company has raised a total of $28 million since it was founded in 2016 to develop proprietary technology that effectively gives robots eyes, allowing them to operate safely alongside human machine operators.

The product, a prototype of which launches in May, works using three-dimensional flash lidar cameras and AI, helping robots map the space around them and react when a human comes close. It's not dissimilar to the tech used in driverless cars, and it allows robots to share a work station with an employee, combining human touch with the super-human speed and strength of a machine — all while removing the element of danger.

You can watch the tech in action here:

Patrick Sobalvarro, the founding CEO of Massachusetts-based Veo, thinks the tech could be a game-changer.

"The key thing from our point of view — in terms of ergonomics, in terms of products — is that humans and machines need to work together fluidly and fluently," he told Business Insider. "Fundamentally, our technology allows industrial robots to track people. And from an environmental point of view, our tech provides a system which you as a manufacturer can set up in less than a day."

Yet for all the benefits Veo's technology may bring, it might also stir up some deep-seated fears. For example, the case for driverless cars is far from being won, particularly after a woman was killed by an Uber autonomous vehicle last year. Are Veo's robots really safe enough to be uncaged? Sobalvarro is adamant they are.


"Safety is our primary ethical responsibility," he said, turning to a familiar Silicon Valley trope to illustrate his point.

"Mark Zuckerberg's motto is: 'Move fast and break things.' My cofounder Clara Vu, our head of engineering, came up with a riposte: 'You can't move fast and break things if those things are people.'

Read more:3 things we learned from Facebook's AI chief about the future of artificial intelligence

"Our safety system is fail-safe — the default is to emergency stop the robot if there are any internal faults in the system, so we will always fail to a safe state.

"We use dual-channel redundancy throughout our system, in the sensors themselves, as well as [in] our processor, which contains two independent computers, as well as a third safety processor to monitor them. We also use dual channel communications with the robot controller safety interfaces.

"This focus on safety extends to algorithms, too. Our system constantly analyzes occluded areas, places we can't see, and we won't let the robot move near them if we determine they could possibly contain a person."


So if the safety box is ticked, what's the business case?

"Today, 3 million robots are all behind cages, and that's very burdensome," Sobalvarro said. "Sometimes, you need to get permission just to open the cages, and that limits efficiency.

"From a financial perspective, it costs $50,000 per minute to stop a car production line — and yet there are 700 process steps to building a car. With current technology, workers sometimes have to stop after two hours and switch out because they're exhausted from heavy lifting. Our tech works best for helping build things like cars — complex objects with lots of components."

And commenting on the fear that robots will replace humans in the workplace, he added: "Our tech is designed to enhance the quality of work that human factory workers are able to do. The robots that use it will free up work for people; they won't take it away."

Veo's $28 million in backing suggests that the firm's investors, including Google Ventures and Lux Capital Management, share this vision of the commercial potential. Indeed, Lux partner Bilal Zuberi told Bloomberg that Veo could one day be worth tens of billions of dollars.

A prototype of its product will be rolled out to select customers in May. Veo would not disclose any further details, but it's confident the pick-up will increase dramatically as the technology proves itself. 

Veo says its robot tech is like working with trained animals


Sobalvarro said Veo's technology is a long way off making machines sentient beings, but he does liken its capability to a human working with a trained animal. A bit like a farmer using a horse to plough a field or a dog to round up cattle.

"If we believed we were making conscious machines, that would open up a whole new set of ethical problems. But we're nowhere near doing that. People conjecture about ways that machines could be conscious, but no one has ever demonstrated it or given good reason to think that they are," he said.

"When we say our technology enables robots to perceive, it's in the same way a Venus flytrap perceives. The robots that use our tech are like simple animals. We want them to treat humans in the same way draft animals treat farmers."

Convincing customers of this capability, and of the technology's safety, will be Veo's big task when it goes to market in May.

SEE ALSO: 8 industries robots will completely transform by 2025


Elon Musk says an update on his brain-computing interface is 'coming soon'


  • Elon Musk hinted at what could be the announcement of a brain-machine interface that would hook human brains up to computers.
  • In response to a question asking for an update on Neuralink, a neurotechnology startup he founded in 2016, the SpaceX and Tesla CEO replied that new information would be "coming soon."
  • A "direct cortical interface," according to Musk, could allow humans to reach higher levels of cognition and give humans a better shot at competing with artificial intelligence.
  • Visit Business Insider's homepage for more stories.

On Sunday, SpaceX and Tesla CEO Elon Musk hinted at what could be the announcement of a brain-machine interface that could one day hook human brains up to computers.

In response to a question asking for an update on Neuralink, a neurotechnology startup he founded in 2016, Musk replied that new information would be "coming soon."

Read more: Robots could wipe out 1.3 million Wall Street jobs in the next 10 years

A "direct cortical interface," according to Musk, could allow humans to reach higher levels of cognition — and give humans a better shot at competing with artificial intelligence, the Wall Street Journal reported in 2017. It's unclear, though, whether Neuralink's main objective is to do just that or to connect human brains to computers for consumer applications.

Musk has repeatedly warned of evil AI overlords in the past, saying that AI could become "an immortal dictator from which we could never escape" in a 2018 documentary called Do You Trust This Computer?

Most of what Neuralink is working on, including any plans for a brain-computer interface, is still tightly under wraps. In one tantalizing clue, Bloomberg recently reported on a still-unpublished academic paper by five authors who have been employed by or associated with Neuralink — though it's unclear whether Musk's tweet referred to their work.

Read more: This robot can find Waldo in any 'Where's Waldo' puzzle in as little as 5 seconds — here's how it works

The paper describes a "sewing machine" for the brain in the form of a needle-like device that is inserted into a rat's skull to implant a bendable polymer electrode in the brain that would read the brain's electrical signals.

Of course, human trials are still a long time out. Neuralink has yet to comment on any possible timelines or announcements.


Scientists created a never-ending livestream of AI-generated death metal — and it sounds deeply disturbing


  • Relentless Doppelganger is a non-stop, 24/7 livestream churning out heavy death metal generated completely by algorithms.
  • It's the work of music technologists CJ Carr and Zack Zukowski, who have been experimenting for years on how to get artificial intelligence to produce recognizable music.
  • The deep learning behind the YouTube channel is trained on samples of a real death metal band called Archspire.
  • These real audio snippets are fed through a neural network to try and create realistic imitations.
  • Visit Business Insider's homepage for more stories.

Even if death metal isn't a perfect fit for you as far as music genres go, you have to admire the AI smarts behind Relentless Doppelganger — a non-stop, 24/7 YouTube livestream churning out heavy death metal generated completely by algorithms.

And this is by no means a one-off trick by Dadabots, the neural network band behind the channel: the project had already produced 10 albums before this livestream appeared.

We have to admit the computer-generated sound of the livestream, all mangled lyrics and frenetic drum beats, is unnerving to us. Your mileage and musical taste may vary, but there's no doubting the impressiveness of the science behind it.

It's the work of music technologists CJ Carr and Zack Zukowski, who have been experimenting for years on how to get artificial intelligence to produce recognizable music in genres like metal and punk.

"This early example of neural synthesis is a proof-of-concept for how machine learning can drive new types of music software," writes the pair in a 2018 paper. "Creating music can be as simple as specifying a set of music influences on which a model trains."

The deep learning behind the YouTube channel is trained on samples of a real death metal band called Archspire, hailing from Canada. These real audio snippets are fed through the SampleRNN neural network to try and create realistic imitations.

Like other AI-powered imitation engines we've seen, SampleRNN is smart enough to know when it's produced an audio clip that's good enough to pass for the genuine article — and as a result it knows which part of its neural network to tweak and strengthen.

Read more: This robot can find Waldo in any 'Where's Waldo' puzzle in as little as 5 seconds — here's how it works

The more data that SampleRNN can be trained on, the better it sounds... or to be more accurate, the more like its source material it sounds.

"Early in its training, the kinds of sounds it produces are very noisy and grotesque and textural," Carr told Jon Christian at the Outline back in 2017. "As it improves its training, you start hearing elements of the original music it was trained on come through more and more."

SampleRNN was originally developed to act as a text-to-speech generator, but Carr and Zukowski have adapted it to work on music genres as well. It's effectively trying to predict what should happen next based on what it's just played — sometimes making tens of thousands of predictions a second.
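That predict-then-append loop can be illustrated with a toy autoregressive generator. A real SampleRNN is a multi-tier recurrent neural network operating on raw audio; in this sketch a simple average of recent samples stands in for the learned model, purely to show the sample-by-sample mechanism:

```python
def toy_predict(history, order=2):
    """Toy stand-in for the trained model: predict the next audio sample
    from the last few samples (here, a plain average)."""
    recent = history[-order:]
    return sum(recent) / len(recent)

def generate(seed, n_samples):
    """Autoregressive generation: each new sample is predicted from
    what was just 'played', then appended to the history."""
    history = list(seed)
    for _ in range(n_samples):
        history.append(toy_predict(history))
    return history

# Starting from two seed samples, generate four more
audio = generate(seed=[0.0, 1.0], n_samples=4)
```

A real system runs this loop tens of thousands of times per second of audio, which is why training and generation are so computationally heavy.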

It can also go back to correct previous 'mistakes' — audio output that doesn't sound as it should do — but this only extends back a few hundred milliseconds. The result is the Relentless Doppelganger video.

Read more: After an employee backlash, Google has cancelled its AI ethics board a little more than a week after announcing it

The team behind the livestream thinks the fast and aggressive play of Archspire particularly suits their approach — in other words, were it applied to a different band, it wouldn't be quite as realistic.

"Most nets we trained made shitty music," Carr told Rob Dozier at Motherboard. "Music soup. The songs would destabilize and fall apart. This one was special though."

The project continues. If you like what you hear on the YouTube livestream, you can check out the neural network's other creations at the Dadabots site.



BlackRock is quietly building a team of 30 data scientists to create a next-generation stock-lending platform


  • BlackRock founded AI Labs last year to research artificial intelligence. Since then, it's expanded to a staff of 30.
  • The team continues to grow — the firm is looking for a senior data scientist, among other roles — and job postings indicate the kinds of projects the Stanford-advised group is tackling.
  • AI Labs' work includes building a next-generation stock-lending platform, working with alternative data sets, and automating rote tasks.
  • Visit Business Insider's homepage for more stories.

The world's largest asset manager is on a mission to automate and innovate through its growing artificial-intelligence team.

BlackRock founded a Palo Alto, California-based group called AI Labs last year, directed by the Stanford professor Stephen Boyd. Now, according to job postings reviewed by Business Insider, the 30-member team is tackling projects ranging from next-generation lending platforms to automating human tasks.

Read more:Artificial intelligence is transforming a $22.9 trillion investing strategy — but the cutting-edge technology comes with a new set of problems

AI Labs is set up to work on new capabilities, from ideas through execution, at the $6.5 trillion asset manager.

"There is a rich problem space for data scientists and engineers across all areas of the business including investments, sales, marketing, operations, product, UX, etc. and the potential to have large scale impact," one recent posting for a senior data scientist said. ("UX" refers to user experience.)

Current projects include building a dynamic pricing and auto-bidding engine for the securities-lending business, a $1.7 trillion market that BlackRock has been active in since 1981, according to promotional materials.

AI Labs' staff is also working with alternative data sets to find useful signals. That's a perennial challenge across asset management: despite new data providers seemingly popping up daily, experts say the booming space has proved difficult to mine for market-beating returns.

In a blog post last year, Jody Kochansky, BlackRock's chief engineer, highlighted some of the ways artificial-intelligence techniques were already helping to sort through vast amounts of what he called "messy data." Investment professionals, he wrote, could begin to glean insights into areas including "the speed of construction in China, foot traffic into major department stores and sentiment picked up from thousands of online employee reviews."

Sign up here for our weekly newsletter Wall Street Insider, a behind-the-scenes look at the stories dominating banking, business, and big deals.

AI Labs is also tackling natural language processing — synthesizing information from financial reports, news, and contracts — as well as automation on repeatable tasks, as it tries to free staff members "to work on the tasks that require their human intelligence."

A BlackRock representative declined to comment further.

BlackRock is far from the only asset manager thinking about how to use artificial intelligence across its businesses. Established managers like Franklin Templeton are embedding data scientists in their investment teams, while startups like Pagaya are using artificial intelligence to reshape investment strategies.

Managers are eyeing artificial-intelligence techniques as one way to boost fund managers' performance, particularly as investors flee higher-revenue active funds for cheaper passive strategies, and as a way to cut costs by automation.

Not all managers are on board, however. In January, the chief technology officer at Cohen & Steers, a $55 billion alternatives-focused firm, told Business Insider that artificial intelligence was "right in the middle of a hype cycle," so the firm was focused on building up its internal infrastructure, staying away from AI.


Google just showed off 4 major updates to its futuristic Lens technology that anyone who goes out to restaurants will love (GOOG, GOOGL)


Google announced new updates to its Lens technology during its Google I/O event on Tuesday.

Lens uses artificial intelligence to "see" what you see through your phone's camera app. With that vision, Lens can offer useful information about what you're seeing. 

The latest updates Google announced — which roll out later this month — are geared toward eating out at restaurants, where Google is looking to smooth over some common friction points.

Check out some of the new updates coming to Lens:

SEE ALSO: LIVE: Watch Google's biggest conference of the year

You'll be able to point your phone's camera at a menu, and Lens will automatically highlight popular dishes.

If you tap on the dish, Lens will pull up photos of the dish that other people have taken. 



Pointing your phone's camera at a restaurant bill will calculate the tip and split the total between your friends.
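The arithmetic behind that bill feature is straightforward; here's a minimal sketch of the tip-and-split calculation (the tip percentage, rounding, and function shape are assumptions for illustration, not Google's actual implementation):

```python
def split_bill(total, tip_percent, num_people):
    """Compute the tip and each person's share of a restaurant bill,
    the kind of arithmetic Lens performs when pointed at a receipt."""
    tip = total * tip_percent / 100
    per_person = (total + tip) / num_people
    return round(tip, 2), round(per_person, 2)

# Example: an $84 bill, 20% tip, split four ways
tip, share = split_bill(total=84.00, tip_percent=20, num_people=4)
```

The harder part of the real feature is the computer-vision step — reading the total off the receipt — not the division afterward.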



Lens can show you a video of how to prepare a dish when you point your phone's camera at the recipe.




Watch this self-piloting drone effortlessly dodge a soccer ball being thrown at it in real time


  • A new video from the University of Zurich shows an autonomous drone ducking and dodging a soccer ball being thrown at it in real time.
  • The video is part of an experiment researching the effects of latency in perception on a robot's ability to navigate unfamiliar environments.
  • Visit Business Insider's homepage for more stories. 

Drones have been capable of avoiding stationary obstacles for years, but researchers at the University of Zurich are working to make them even better at dodging moving objects.

As part of an experiment, the researchers recently published a video that shows an autonomous drone ducking and dodging a soccer ball being thrown at it in multiple scenarios. 

The experiment was part of a project by Davide Scaramuzza's Robotics and Perception Group at the University of Zurich, which recently published a paper that studies the effects that latency in perception can have on how quickly robots are able to navigate through an unfamiliar environment.

To validate their analysis, the researchers equipped a drone with "event cameras" that would enable it to detect and dodge objects thrown in its path, according to their white paper. The project was first reported by IEEE Spectrum, the online magazine for IEEE, a professional organization focused on engineering and applied sciences.

In the video, the drone can be seen dodging a soccer ball thrown from various angles. In one shot, it subtly tilts to the side to avoid the ball in real time, while another experiment later in the video shows it zipping upward so that the ball can pass underneath it. The experiment used event cameras, which are special types of sensors that are very sensitive to motion and can respond to changes in a scene within microseconds, IEEE Spectrum reported.

With drones now being used for everything from agriculture to package delivery, the ability to detect and avoid moving obstacles in real time is sure to be crucial. Check out the video below:  

 

SEE ALSO: 3 things we learned from Facebook's AI chief about the future of artificial intelligence


I've owned an Amazon Echo for over 3 years now — here are my 19 favorite features (AMZN)


Amazon's family of Echo speakers is among the most popular gifts right now, so many people are activating their Echo units for the first time.

Here's what you need to know: These speakers, which can respond to either "Alexa," "Amazon," or even "Computer" (for those "Star Trek" fans out there), are extremely quick to respond, and understand your commands far better than any other device I've used.

Thanks to its excellent audio system, with seven microphones for listening and a 360º omni-directional audio grille for speaking, Amazon Echo works exceedingly well wherever I am in my home. I can hear it — and it can hear me — almost perfectly.

Amazon's Echo has completely transformed the way I live in my apartment. Take a look at some of my favorite ways I use it.

SEE ALSO: Apple's 2016 report card: Grading all the new hardware Apple released this year

"Alexa, what time is it?"

Honestly, the best use cases for Amazon Echo are the simplest ones. With the Echo, I don't need to bother searching for my phone just to get the time — I can ask for it from anywhere in the house and get the answer immediately. It's a small thing, but it totally makes a difference when you're rushing in the morning.



"Alexa, how's the weather outside today?"

Again, it's a simple task, but it's way quicker and better than pulling out your phone and opening your favorite weather app. Amazon Echo will not only tell you the current temperature, but also the expected high and low temperatures throughout the day, and other conditions such as clouds and rain.



"Alexa, set a timer for 10 minutes."

Amazon Echo is the perfect cooking or baking companion because it's totally hands-free. When the timer's up, a radar-like ping will sound until you say "Alexa, stop."



"Alexa, play some Kanye."

Amazon Echo can play thousands of songs from Amazon's Prime Music catalogue, and Alexa works quickly to pause, skip, or change songs whenever you want.



"Alexa, play some chill music on Spotify."

Amazon Echo also works with Spotify Premium, in addition to Prime Music.

If you ask Alexa to "play Spotify," it'll play the last thing you were playing, picking up where you left off. But you can also ask it to play any artist or album in Spotify's extensive catalogue, or any of Spotify's fun playlists, including Discover Weekly, or any of the ones based on genres or moods. You can learn more about Spotify's integration with Amazon Echo here.



"Alexa, ask Uber to request a ride."

Need a ride to the airport? You can request an Uber car to pick you up from your residence just by asking Echo. Once you activate the Uber skill in the Alexa app, your Echo will let you know how far away the closest car is, and it'll even let you know if there's surge pricing before you accept the ride.



"Alexa, add garlic to the shopping list."

Whenever you run out of food, just tell Alexa to add stuff to the shopping list. Since my significant other and I both share an Amazon Prime account, the shopping list syncs to both our phones' Alexa apps so we can order items straight from Amazon, buy them later, or make last-minute changes that we'll both see. 



"Alexa, what's on my calendar?"

Since Amazon Echo can sync with your Google Calendar, you don't need to look through your phone to see your upcoming schedule. Just ask Alexa for details about your meetings and appointments that day.



"Alexa, set a daily alarm for 7:00 a.m."

Echo can wake you up in the morning — or get your attention at any other time of day — with over 10 different alarm tones. You can choose soothing sounds or something a little more attention-grabbing right from the Alexa app. And as of April, you can create recurring alarms that repeat daily, the same day every week, or just for the weekdays or weekends.



"Alexa, heads or tails?"

What should we eat for dinner? Where should our next vacation be? Sometimes these questions are best decided by a coin flip — and you don't need a physical coin if you have Alexa, who will flip a virtual one for you. (I've tried it several times; the results really do appear to be random.)
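Amazon hasn't published how Alexa flips its coin, but a virtual coin flip is easy to sketch. Here's a minimal Python illustration of why repeated flips look fair:

```python
import random
from collections import Counter

def flip_coin():
    """Return 'heads' or 'tails' with equal probability."""
    return random.choice(["heads", "tails"])

# Over many flips the split converges toward 50/50, which is
# exactly what makes a coin (virtual or not) a fair tie-breaker.
counts = Counter(flip_coin() for _ in range(100_000))
print(counts["heads"] / 100_000)  # close to 0.5
```

Any single flip is unpredictable; it's only the long-run 50/50 split that makes it a fair decision procedure.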



"Alexa, read my Kindle book."

Amazon Echo can read your Kindle books aloud to you using the same text-to-speech technology it uses to read Wikipedia entries and news articles aloud. You can also tell Alexa to pause or skip chapters both forwards and backwards.



"Alexa, read my audio book."

Aside from Kindle books, Alexa can also read aloud any audiobooks you've purchased from Audible. You can ask Alexa to resume your audiobook, play different titles, skip chapters, restart chapters, or go back to a specific chapter.  



"Alexa, add 'buy a winter jacket' to my to-do list."

Just like shopping lists, you can sync to-do lists to your Alexa app(s) across your devices. You can add to-do items using Alexa and add notes in addition to completing or deleting items via the app.



"Alexa, what was the score of the last Cavaliers game?"

It's actually much easier and faster to ask Alexa for sports scores than to visit a sports website or app and dig through all the box scores to find the one you want.



"Alexa, find me a good Italian restaurant nearby."

Amazon Echo can help cure any craving you might have. Just ask Alexa to find a nearby restaurant from any cuisine you like, and Alexa will start listing off restaurants. You can learn more details about those restaurants in the Alexa app.



"Alexa, what's in the news?"

After you first set up your Echo, you can choose which news sources you want to get information from, like NPR, ESPN, and local radio stations. After that, if you ask Alexa for the news or a flash briefing, you can hear news straight from those sources. You can always change which news sources you want through the Alexa app.



"Alexa, set a sleep timer for 60 minutes."

If you like listening to music, podcasts or audiobooks before you fall asleep, you can start listening to any of those and simply ask Alexa to set a sleep timer. Now you don't have to worry about turning off the Echo if you start feeling drowsy.



"Alexa, ask Capital One to pay my credit card bill."

Amazon has a handful of skills, made by companies like Capital One, that let you access your banking information. Yes, you can use Echo for banking purposes: You can check your recent transactions, pay your credit card bills, and more. It's also secure: You can create a four-digit "personal key," and every time you ask a banking-related question, you'll be prompted to recite it before you're given any access.



"Alexa, ask KidsMD about vomiting."

Another Amazon Echo skill, developed by folks at Boston Children's Hospital, can answer your questions about illnesses, especially if you have children. With the KidsMD skill enabled, you can ask whether symptoms warrant a call or a doctor's visit, or find dosing information for ibuprofen or acetaminophen.



This only scratches the surface.

Amazon Echo has tons of built-in features, and it's constantly getting better. Amazon continues to add new features, and outside developers keep producing "skills," or specialized applications that give you neat features when activated. Amazon rounds up all the new features and skills in a neat little email you'll get every Friday.
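Under the hood, a skill is essentially a web service: Alexa sends it a JSON request describing what the user said, and the service returns a JSON response to be spoken aloud. Here's a minimal Python sketch of that request/response shape — the `CoinFlipIntent` name is a hypothetical example, and real skills are typically built with Amazon's Alexa Skills Kit rather than raw dictionaries:

```python
import random

def handle_request(event):
    """Turn an Alexa-style JSON request into a spoken response.
    'CoinFlipIntent' is a hypothetical intent name for illustration."""
    req = event.get("request", {})
    intent = req.get("intent", {}).get("name")

    if req.get("type") == "LaunchRequest":
        # The user opened the skill without asking for anything specific.
        text = "Welcome! Ask me to flip a coin."
    elif req.get("type") == "IntentRequest" and intent == "CoinFlipIntent":
        text = f"You got {random.choice(['heads', 'tails'])}."
    else:
        text = "Sorry, I didn't catch that."

    # Alexa expects a response envelope like this from the skill's endpoint.
    return {
        "version": "1.0",
        "response": {
            "outputSpeech": {"type": "PlainText", "text": text},
            "shouldEndSession": True,
        },
    }
```

When you say "Alexa, ask Uber to request a ride," something shaped like this is happening behind the scenes: your utterance is mapped to an intent, the skill's service handles it, and the returned text is read back to you.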

The ongoing evolution of Amazon Echo is one of the best reasons to own this device. You can buy an Amazon Echo here.



Facebook has finally revealed what its secretive robotics division is working on, and it could spark competition with rivals like Google and Apple (FB)



  • Facebook for the first time has revealed the projects that its robotics division is working on.
  • The company hopes the projects will eventually lead to breakthroughs in artificial intelligence that can be applied more broadly.
  • It's also an effort by Facebook to establish itself as a leader in artificial intelligence, a title that rivals like Google and Apple, among others, are also vying for.

Facebook has publicly spoken about its interest in robotics in the past, but on Monday the company finally shared details regarding the specific projects it's working on.

The social-media giant has unveiled three robotics projects that it hopes will contribute to solving the challenge of building artificial-intelligence systems that don't have to rely on large quantities of labeled data to learn new information. The company is conducting research aimed at teaching robots to learn about the world much as humans do.

"The real world is messy, it's difficult," Roberto Calandra, a research scientist in Facebook's AI division, said when speaking with Business Insider. "The world is not a perfect place; it's not neat. So the fact that we are trying to develop algorithms that work on real robots [will] help to create [AI] algorithms that, generally speaking, are going to be more reliable, more robust, and that are going to learn faster."

The projects are housed within the company's artificial-intelligence research division, which operates independently of Facebook's suite of popular apps and services. The company says its robotics and AI research is meant to advance the field across industry and academia, and that its computer scientists are not conducting research for the purpose of incorporating the technology into user-facing products.

In one such project, Facebook is developing algorithms meant to reduce the amount of time it would take for a six-legged robot to learn how to walk, even if the machine has no information about its environment.

"The idea is that hopefully we can obtain performance where the robot, without any prior knowledge about the world or what it means to walk, can learn to walk in a natural way within a few hours," Calandra said.
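Facebook hasn't published the exact algorithm here, but the general recipe — improve a gait purely from a reward signal, with no model of the world — can be sketched with simple hill climbing. The reward function below is a made-up stand-in for "distance walked," not anything from Facebook's research:

```python
import numpy as np

rng = np.random.default_rng(0)

def distance_walked(gait):
    """Toy stand-in for the robot's reward: peaks at an unknown
    gait setting that must be discovered by trial and error."""
    best_gait = np.array([0.6, -0.3, 0.1])
    return -np.sum((gait - best_gait) ** 2)

# Hill climbing: randomly perturb the current gait parameters and
# keep the change only if the robot 'walks farther'. No prior
# knowledge about the world or about walking is used.
gait = np.zeros(3)
best = distance_walked(gait)
for _ in range(500):
    candidate = gait + rng.normal(scale=0.1, size=3)
    score = distance_walked(candidate)
    if score > best:
        gait, best = candidate, score
```

Real legged-robot research replaces the toy reward with measurements from a physical robot and uses far more sample-efficient optimizers (hence the "within a few hours" goal), but the trial-and-error structure is the same.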


As part of another project, Facebook found that using curiosity as a motivator could help robots learn more quickly, drawing a parallel to the way humans learn. The company says it has applied this curiosity-driven technique to applications using a robotic arm as well as in simulations.
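A common way to operationalize "curiosity" in this line of research is to train a forward model that predicts the next state, and reward the agent in proportion to how wrong the prediction was: surprising transitions pay more, so the agent seeks out the parts of the world it doesn't yet understand. A minimal sketch of that general technique (not Facebook's specific implementation) looks like this:

```python
import numpy as np

# A tiny linear forward model: predicts next_state from (state, action),
# fit online from the transitions the agent actually experiences.
W = np.zeros((2, 4))  # state dim 2, state+action dim 4

def intrinsic_reward(state, action, next_state, lr=0.05):
    """Curiosity bonus = the forward model's squared prediction error.
    As the model learns a transition, the bonus for it decays to zero."""
    global W
    x = np.concatenate([state, action])
    prediction = W @ x
    error = next_state - prediction
    W += lr * np.outer(error, x)  # learn from the surprise
    return float(np.sum(error ** 2))

# Revisiting the same transition becomes less and less rewarding:
s, a, s_next = np.array([1.0, 0.0]), np.array([0.5, 0.5]), np.array([0.5, 0.2])
rewards = [intrinsic_reward(s, a, s_next) for _ in range(50)]
```

Because the bonus for familiar transitions decays, the agent is pushed toward novel states — which is the parallel to human curiosity the researchers are drawing.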

The third area Facebook is looking at involves helping robots learn about the world through touch. The company developed a method enabling robots to accomplish tasks by learning through touch without being given any specific training data. Part of why this is still such a challenge for machines is that it is difficult to build hardware with the right touch-enabled sensors, according to Calandra.

"As humans, whenever we grasp our cup or reach for our phones, we're actually very good at perceiving objects and understanding how to manipulate through touch," he said. "For robots, at the moment, the hardware is difficult to produce."

Speeding up the process

Such advancements could be crucial for speeding up the development of artificial-intelligence systems. In many cases, AI systems must be trained with labeled data to function. For example, if you want to create an algorithm that can detect cats in photos, it must understand what a cat looks like by trawling through vast amounts of data, such as thousands of photos labeled as having cats in them.

But labeling data is time-consuming, and in some cases the necessary data may not be available. This is why Facebook's researchers are hoping their robotics-oriented projects will lead to algorithms that can learn about the world much as humans do.
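To see why labels matter, here's a toy supervised classifier in Python. The two "image features" per example are made up for illustration; the point is that the model is useless without the human-supplied `y` labels:

```python
import numpy as np

# Every training example needs a human-supplied label; without y,
# the algorithm has no idea which cluster of examples means 'cat'.
X = np.array([[0.9, 0.8], [0.8, 0.9], [0.1, 0.2], [0.2, 0.1]])
y = np.array(["cat", "cat", "not_cat", "not_cat"])

def predict(x):
    """Nearest-centroid classifier: assigns x to the class whose
    labeled examples it sits closest to."""
    centroids = {label: X[y == label].mean(axis=0) for label in set(y)}
    return min(centroids, key=lambda label: np.linalg.norm(x - centroids[label]))

print(predict(np.array([0.85, 0.85])))  # → cat
```

Scale the four hand-labeled rows up to thousands of labeled photos and you have the standard supervised pipeline — and the labeling bottleneck Facebook's self-supervised robotics work is trying to sidestep.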

Franziska Meier, also a research scientist for Facebook's AI group, points to natural language processing as one use case in which the required data may not be available. "You might want to translate between different languages and have a lot of data for some languages but not the others," she told Business Insider.


It's also an effort by Facebook to establish itself as a leader in artificial intelligence, a title that rivals like Google and Apple, among others, are also vying for. Facebook announced last year that it had hired prominent computer scientists from Carnegie Mellon University to bolster its AI and robotics efforts, while Apple revealed in December that it had poached a top Google executive to head up its machine learning and artificial intelligence strategy.

Top Silicon Valley firms like Facebook are investing more heavily in AI, as the industry is expected to boom. In April 2018, the market-research firm Gartner estimated that the business value derived from artificial intelligence would reach $1.2 trillion for the year, representing a 70% increase from 2017.

SEE ALSO: Watch this self-piloting drone effortlessly dodge a soccer ball being thrown at it in real time


