Channel: Artificial Intelligence

Sergey Brin: Silicon Valley has outgrown the time of being 'wide-eyed and idealistic' about tech and needs to show 'responsibility, care and humility' (GOOG, GOOGL)


  • Google cofounder Sergey Brin wrote Alphabet's 2017 annual Founders Letter.
  • Brin said artificial intelligence advances represent the most significant computing development in his lifetime.
  • But he warned that tech companies have a responsibility to consider the impact of their advances on society, marking a change in tone.


Sergey Brin, the cofounder of Google, stressed the need for caution, accountability and humility within the tech industry as its innovations become "deeply and irrevocably" ingrained in society, a striking shift in tone from a leader of a Silicon Valley business cohort famous for its conviction in its own righteousness.

In the annual founders letter released by Google-parent company Alphabet, Brin touted the far-reaching innovations in artificial intelligence, computing power and speech recognition in recent years. 

"Every month, there are stunning new applications and transformative new techniques. In this sense, we are truly in a technology renaissance, " Brin wrote in the letter published on Alphabet's investor relations site on Friday. Advances in artificial intelligence, he said, represent the "most significant development in computing in my lifetime."

But Brin said the tech industry could no longer maintain its "wide-eyed and idealistic" attitude about the impact of its creations. 

"There are very legitimate and pertinent issues being raised, across the globe, about the implications and impacts of these advances," Brin said. 

Among these issues: 

"How will they affect employment across different sectors? How can we understand what they are doing under the hood? What about measures of fairness? How might they manipulate people? Are they safe?"

Brin has seemed dismissive of "hypothetical situations" in the past

The comments come as the tech industry, home to the most valuable companies in the American economy, has come under fire over a variety of issues, including the collection and misuse of people's personal information (as highlighted by Facebook's Cambridge Analytica scandal); the spread of misinformation, propaganda and hate speech on services like YouTube, Google and Facebook; and a growing anxiety about our dependence on smartphones.

Brin has been an important figure in the development of technologies once considered the stuff of science fiction, helping to shape the Google X labs where products like self-driving cars, face-worn computers and airborne delivery drones were born. 

The 44-year old Russian-born executive has at times shown himself to be dismissive of the public's concerns regarding potential impacts of technological advances, labeling some as "hypothetical situations."

"We can debate as philosophers, but the fact is that we can make cars that are far safer than human drivers,"Brin said at a tech conference in 2014, when asked about the ethics involved in creating self-driving cars that must choose between hitting a pedestrian or a truck. 

In March, the first self-driving car fatality occurred in Arizona when an autonomous Uber vehicle struck a pedestrian at night (human-driven cars are involved in more than 35,000 traffic fatalities per year in the US).

"There is serious thought and research going into all of these issues," Brin wrote in the founder's letter on Friday. "Most notably, safety spans a wide range of concerns from the fears of sci-fi style sentience to the more near-term questions such as validating the performance of self-driving cars."

Here's Sergey Brin's full 2017 letter:

“It was the best of times,
it was the worst of times,
it was the age of wisdom,
it was the age of foolishness,
it was the epoch of belief,
it was the epoch of incredulity,
it was the season of Light,
it was the season of Darkness,
it was the spring of hope,
it was the winter of despair ...”

So begins Dickens’ “A Tale of Two Cities,” and what a great articulation it is of the transformative time we live in. We’re in an era of great inspiration and possibility, but with this opportunity comes the need for tremendous thoughtfulness and responsibility as technology is deeply and irrevocably interwoven into our societies.

Computation Explosion

The power and potential of computation to tackle important problems has never been greater. In the last few years, the cost of computation has continued to plummet. The Pentium IIs we used in the first year of Google performed about 100 million floating point operations per second. The GPUs we use today perform about 20 trillion such operations — a factor of about 200,000 difference — and our very own TPUs are now capable of 180 trillion (180,000,000,000,000) floating point operations per second.

Even these startling gains may look small if the promise of quantum computing comes to fruition. For a specialized class of problems, quantum computers can solve them exponentially faster. For instance, if we are successful with our 72 qubit prototype, it would take millions of conventional computers to be able to emulate it. A 333 qubit error-corrected quantum computer would live up to our name, offering a 10^100x speedup.
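
(A quick back-of-the-envelope check of the figures above, offered as an illustration rather than anything from the letter itself: the conventional heuristic is that simulating n qubits classically requires tracking roughly 2^n amplitudes, and 2^333 is about 1.75 x 10^100, which appears to be where the googol-sized speedup and the nod to Alphabet's name come from.)

```python
# Illustrative sanity check of the numbers quoted above; not part of Brin's letter.
pentium_flops = 100e6   # ~100 million FLOP/s cited for the Pentium IIs
gpu_flops = 20e12       # ~20 trillion FLOP/s cited for today's GPUs
tpu_flops = 180e12      # 180 trillion FLOP/s cited for the TPUs

print(gpu_flops / pentium_flops)   # 200000.0 -- the "factor of about 200,000"
print(tpu_flops / pentium_flops)   # 1800000.0

# Standard heuristic (assumed here): simulating n qubits classically tracks ~2**n amplitudes.
print(2**333 > 10**100)            # True: 2**333 is about 1.75e100, roughly a googol
print(len(str(2**333)))            # 101 digits, i.e. on the order of 10**100
```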

There are several factors at play in this boom of computing. First, of course, is the steady hum of Moore’s Law, although some of the traditional measures such as transistor counts, density, and clock frequencies have slowed. The second factor is greater demand, stemming from advanced graphics in gaming and, surprisingly, from the GPU-friendly proof-of-work algorithms found in some of today’s leading cryptocurrencies, such as Ethereum. However, the third and most important factor is the profound revolution in machine learning that has been building over the past decade. It is both made possible by these increasingly powerful processors and is also the major impetus for developing them further.

The Spring of Hope

The new spring in artificial intelligence is the most significant development in computing in my lifetime. When we started the company, neural networks were a forgotten footnote in computer science; a remnant of the AI winter of the 1980’s. Yet today, this broad brush of technology has found an astounding number of applications. We now use it to:

  • understand images in Google Photos;
  • enable Waymo cars to recognize and distinguish objects safely;
  • significantly improve sound and camera quality in our hardware;
  • understand and produce speech for Google Home;
  • translate over 100 languages in Google Translate;
  • caption over a billion videos in 10 languages on YouTube;
  • improve the efficiency of our data centers;
  • suggest short replies to emails;
  • help doctors diagnose diseases, such as diabetic retinopathy;
  • discover new planetary systems;
  • create better neural networks (AutoML);
    ... and much more.

Every month, there are stunning new applications and transformative new techniques. In this sense, we are truly in a technology renaissance, an exciting time where we can see applications across nearly every segment of modern society.

However, such powerful tools also bring with them new questions and responsibilities. How will they affect employment across different sectors? How can we understand what they are doing under the hood? What about measures of fairness? How might they manipulate people? Are they safe?

There is serious thought and research going into all of these issues. Most notably, safety spans a wide range of concerns from the fears of sci-fi style sentience to the more near-term questions such as validating the performance of self-driving cars. A few of our noteworthy initiatives on AI safety are as follows:

I expect machine learning technology to continue to evolve rapidly and for Alphabet to continue to be a leader — in both the technological and ethical evolution of the field.

G is for Google

Roughly three years ago, we restructured the company as Alphabet, with Google as a subsidiary (albeit far larger than the rest). As I write this, Google is in its 20th year of existence and continues to serve ever more people with information and technology products and services. Over one billion people now use Search, YouTube, Maps, Play, Gmail, Android, and Chrome every month.

This widespread adoption of technology creates new opportunities, but also new responsibilities as the social fabric of the world is increasingly intertwined.

Expectations about technology can differ significantly based on nationality, cultural background, and political affiliation. Therefore, Google must evolve its products with ever more care and thoughtfulness.

The purpose of Alphabet has been to allow new applications of technology to thrive with greater independence. While it is too early to declare the strategy a success, I am cautiously optimistic. Just a few months ago, the Onduo joint venture between Verily and Sanofi launched their first offering to help people with diabetes manage the disease. Waymo has begun operating fully self-driving cars on public roads and has crossed 5 million miles of testing. Sidewalk Labs has begun a large development project in Toronto. And Project Wing has performed some of the earliest drone deliveries in Australia.

There remains a high level of collaboration. Most notably, our two machine learning centers of excellence — Google Brain (an X graduate) and DeepMind — continue to bring their expertise to projects throughout Alphabet and the world. And the Nest subsidiary has now officially rejoined Google to form a more robust hardware group.

The Epoch of Belief and the Epoch of Incredulity

Technology companies have historically been wide-eyed and idealistic about the opportunities that their innovations create. And for the overwhelming part, the arc of history shows that these advances, including the Internet and mobile devices, have created opportunities and dramatically improved the quality of life for billions of people. However, there are very legitimate and pertinent issues being raised, across the globe, about the implications and impacts of these advances. This is an important discussion to have. While I am optimistic about the potential to bring technology to bear on the greatest problems in the world, we are on a path that we must tread with deep responsibility, care, and humility. That is Alphabet’s goal.




12 crucial questions raised by 'Westworld'


Note: Spoilers ahead for previously aired Westworld episodes, along with potentially spoiler-y speculation for future episodes.

Chaos has broken out in "Westworld."

By the end of the first season of HBO's sci-fi western drama, the meticulously constructed rules of the artificial world at the heart of the show had collapsed.

Guests in Westworld were no longer safe as they interacted with the park's artificially intelligent "hosts"— gunslingers, brothel madams, a farmer’s daughter, Native Americans, and more. Instead of being able to terrorize, shoot, and sleep with the park's robot hosts as they pleased, park visitors and Westworld's designers became vulnerable to violence from the same characters they'd abused for years.

It was the latest bloody twist in the mysterious tale, and surely there are more to come in season two, the second episode of which airs on Sunday.

Along the way, "Westworld’s" story has confronted all kinds of uneasy questions — mainly scientific and philosophical — about the complex intersection of technology and people.

Here are some of the most interesting questions the show has led us to consider so far.


Do we all live in a simulation?

For a time, all the hosts in Westworld woke up to go about their day — working, drinking, fighting, whatever it entailed — without knowing that their entire existence was a simulation created by the park’s designers.

Physicists and philosophers say that in our own world, we can’t prove we don’t live in some kind of computer simulation.

Some think that if that is the case, we might be able to "break out" by noticing errors in the system.

Westworld's hosts seem to have caught on to exactly that. The question for them now is what life is like outside the simulation.



Can we control artificial intelligence?

Each time the park woke up (or the simulation restarted), the hosts were supposed to go about their routines, playing their roles and reciting the same lines until some guest veered into the storyline, triggering them to adjust accordingly. The guest might go off on an adventure with the host or they might rape or kill them. Whatever happened, when the story reset, the hosts' memories were wiped clean.

Except it didn't quite work that way, and hosts started to remember — and resent — how they were treated. The Delos employees at the park lost control.

Right now, real-life researchers of artificial intelligence believe that out-of-control AI is a myth and that we can control intelligent software. But then again, few computer and linguistic scientists anticipated that machines would learn to listen and speak as well as people — and they are getting closer and closer to that point.



How far off are intelligent humanoid machines like those in Westworld?

Behind the scenes at Westworld's headquarters, advanced industrial tools can 3D-print the bodies of hosts from a mysterious white goop (at least, when those hosts aren't in open rebellion). Perhaps the material is made of nanobots, or some genetically engineered tissue, or maybe it's just plastic that's manipulated by some as-yet-undisclosed technology.

There's a lot of mystery around how hosts are created. What powers these strange constructs? How are the batteries recharged, if at all? Can (and how do) they feel pain and pleasure?

As we've seen in several episodes, the "thinking" part of the machines is located in the head (under some very real-looking brain tissue). But what is that little device? 

Nothing like these automatons exists in the real world, but researchers and entrepreneurs are working hard to advance soft robots, ultra-dense power sources, miniaturized everyday components (some down to an atomic scale), and other bits and pieces that might ultimately comprise a convincing artificial human.




The CEO of a German AI firm explains why young talent would rather work for him than for Google


  • The "war for talent" on the US West Coast has led large corporations to recruit on a global scale.
  • The CEO of the German Research Center for Artificial Intelligence has managed to lure young talent away from Silicon Valley.
  • He says the draw of a company is about living standards as well as pay.

 

It's no secret that if you want to innovate in the tech sector, be on a high salary and get all the other perks, you should go to Silicon Valley. The "war for talent" in this region has led large corporations to recruit on a global scale. And if a tech giant like Google, Facebook, or Amazon calls, you're hardly likely to turn them down, are you?

Wrong.

Wolfgang Wahlster, CEO and Scientific Director of the German Research Center for Artificial Intelligence (Deutsches Forschungszentrum für Künstliche Intelligenz in German, or DFKI), has apparently found a way to lure young talent away from Silicon Valley over to Germany.

In an interview with the "Frankfurter Allgemeine Zeitung," Wahlster spoke of an employee who quit Google to work at his research centre in Germany — for a third of the pay he was on in Silicon Valley.

Can the DFKI ever compete with Google?

According to Wahlster, various factors contributed to the decision: "It's not all about pay; it's also about the living standards you can afford where you are." In the US, the former Google employee would have had to pay a hefty sum for his children's education, whereas it is free in Germany.

Of course, Wahlster admitted that Germany could not compete with China and America in terms of AI and machine learning. "We don't have any companies like Google, Baidu or Tencent."

With 900 employees and more than 80 spin-off companies, the DFKI is, according to Wahlster, the largest research centre of its kind in Germany. He emphasised that the US chipmaker Nvidia, in addition to renowned American universities such as Stanford, has made a GPU computer of its own production available free of charge exclusively to the DFKI, on which neural networks for deep learning can be trained around 100 times faster than on ordinary computers.

On top of that, in addition to DAX companies, DFKI counts US companies such as Intel, Google, and Microsoft among its shareholders, and it has an office in Beijing.


Google founder says the beginning of 'A Tale of Two Cities' is a great way to describe the tech industry today (GOOGL)


  • Sergey Brin, a cofounder of Google and president of Google's parent company Alphabet, wrote and published Alphabet's annual Founders Letter on Monday.
  • Brin started the letter with a quote taken from Charles Dickens' famous work "A Tale of Two Cities," calling it "a great articulation of the transformative time we live in."

 

Sergey Brin was responsible for writing Alphabet's Founders Letter this year. Alphabet, for those unfamiliar, is the parent company of Google.

Alphabet's annual Founders Letter is designed to be an update on the state of the company's various businesses, including Google, and a general guide on where it's looking next.

This year, though, Brin kicked off the Founders Letter with a quote taken from the very beginning of "A Tale of Two Cities," a book written over 150 years ago by Charles Dickens — a work of historical fiction about the French Revolution.

Here's the beginning of Brin's Founders Letter (emphasis ours):

"It was the best of times,
it was the worst of times,
it was the age of wisdom,
it was the age of foolishness,
it was the epoch of belief,
it was the epoch of incredulity,
it was the season of Light,
it was the season of Darkness,
it was the spring of hope,
it was the winter of despair ..."

So begins Dickens’ "A Tale of Two Cities," and what a great articulation it is of the transformative time we live in. We’re in an era of great inspiration and possibility, but with this opportunity comes the need for tremendous thoughtfulness and responsibility as technology is deeply and irrevocably interwoven into our societies.

This theme of balancing innovation and responsibility ran throughout the Founders Letter. For example, Brin uses part of the letter to describe all the ways in which we're living in a technology renaissance, but then plays devil's advocate in the following paragraph:

"Such powerful tools also bring with them new questions and responsibilities," Brin wrote. "How will they affect employment across different sectors? How can we understand what they are doing under the hood? What about measures of fairness? How might they manipulate people? Are they safe?"

The rest of the Founders Letter goes into more detail on Google's work in artificial intelligence, machine learning, and automation, the need for powerful processors, and Alphabet's need to evolve to meet the global challenges of the 21st century. You can read the full Founders Letter from Brin, as well as our analysis on what it all means, right here.


Futurist Gerd Leonhard talks to Business Insider about cryptocurrency, the professions that will disappear over the next 20 years, and why he quit Facebook


A well-known futurist and writer, the CEO of the Futures Agency and one of the most influential people in Europe, Gerd Leonhard claims that some banks will lose 60% of their revenue in the near future due to blockchain and technological development. He also predicts that physical cash will have disappeared within 10 years, having been replaced by digital money.

Business Insider Poland's Łukasz Grass spoke to Gerd Leonhard in Warsaw during the "SAP NOW Inteligenta Organizacja" ("SAP NOW: Intelligent Organisation") conference.

Below is a transcript of the interview:

Łukasz Grass: You've suggested that, over the next 20 years, humanity will change more than it has in the past 300 years — what, specifically, will change and what will be the cause?

Gerd Leonhard: When I talk about humanity changing, I mean society, our jobs, our work and our economy. In a broader sense, I also mean our chemistry, our consciousness ... everything that makes us human. Until now, technology has remained outside our bodies. But now, technology is so advanced that it can be used inside our bodies. We can use technology to change our minds — to be superhuman. In the next 10 years, technology will be inside our body: we'll have nanobots in our bloodstreams and brain-computer interfaces.

Grass: You've alluded to the idea that "software is consuming the world." I also get the sense that new technology and blockchain will soon be consuming the banks too. In March, I hosted a panel discussion at the London School of Economics with the heads of five of the biggest banks in Poland, and we discussed the future of their industry. Among their claims were that bank branches weren't going anywhere anytime soon and that cryptocurrencies posed no threat to the banking sector. They only said that blockchain would help in the development of banking. What do you think about these assertions?

Leonhard: There's no "black and white" answer to whether those ideas are true or not. The most important thing about technology is that it makes service faster and more convenient and that it takes out the middleman. The banks are middlemen, and it must be emphasised that the role they play is too big. In the near future, there will be thousands of ways of doing things currently done by the banks that won't require them. For instance, we'll eventually be able to instantly transfer wages via phone without banks as intermediaries, as is already the case in Africa. This does not mean that banks will completely disappear, but it does mean that some of them will lose between 40% and 60% of their income. Let us remember though that we have private banking, consumer banking, and investment banking. We're connected to the internet and really don't need to use a bank to send money anyway.


Grass: And bank branches?

Leonhard: We haven't used bank branches in a traditional sense for a long time. The situation is similar to car factories and dealerships. In five years, we won't be going to car dealerships; cars will be 100% personalised. We'll be putting them together in virtually the same way we create a Spotify playlist. And the car will be delivered to us — or will deliver itself.

Grass: What about insurance and media companies? Will they also be consumed by technological progress?

Leonhard: The current intermediaries are being threatened by powerful new intermediaries, i.e. digital platforms such as Amazon, Google, Facebook, or Alibaba. The latter already runs 65 different businesses, in sectors ranging from retail, e-commerce and banking to social media and films — and Amazon is soon to become a "bank"! 350 million premium users will get free banking services and no one will pay a penny for transferring money. Conventional banking businesses have a very simple financial model: they take money from you and invest it, sometimes charging a fee higher than the amount transferred for international transfers. The new digital banking will help you spend your money rather than hold onto it in the traditional banking sense, so this is a completely different business model.

Grass: Do you think that traditional money will disappear? What about cryptocurrency?

Leonhard: Money will go digital, that's for sure. And this is the first, most important change; not cryptocurrencies. In the next 10 years, currency will all be digital and paper currency will be gone.

Grass: There is no doubt that technological progress will change the world enormously over the next 20 years. The question is, what will it take down with it along the way?

Leonhard: People usually think about changes in civilisation in the context of loss. I actually think 95% of changes are positive. The only thing we have to agree on is how much these changes are allowed to interfere with our lives.

Grass: Fair enough but, all the same, I'd like for you to partake in a small experiment concerning changes that may not be to everyone's liking. On a dozen or so cards, I've printed the names of various professions. Could you single out the professions that are to disappear or be severely cut back in the next 20 years? I've chosen, among others, a lawyer, a banker, a journalist, a postman, a cashier, an architect, an investment advisor, a broker, a miner, a financial advisor and a farmer.


Leonhard: A lot of this depends on where you work. Let's start with the lawyer. In many cases, this profession is simply routine work, such as checking contracts. That sort of work will disappear. But the same goes for architects, investment advisors, real estate agents, and bankers — but only in positions which are mainly centered around menial office work.

Let's say I have 10,000 euros to spend. I don't need an advisor; I can use an artificial intelligence stock portfolio, which is better, quicker, and cheaper.

As for journalists, AI is unlikely to kill them off, but we have a similar situation to the one with lawyers: journalists who write original work, whose work requires meeting people — those people will keep their jobs.

Grass: Fully agree. Simple information, messages, translations, and texts based on sets of data will be written by artificial intelligence. But what about farmers?

Leonhard: Agriculture is already highly automated, but I can't see farmers completely disappearing. We're now developing 'vertical farms.' This is a very good but expensive investment. Imagine a 40-storey skyscraper where you can grow lettuce, radish, chives, and other fruit and vegetables on each floor. This is already happening in Abu Dhabi. So farmers will probably stick around, and in some countries they'll even make a comeback, but using modern technologies. They will not be conventional farmers working fields with pesticides. New technologies will change agriculture very quickly.

Grass: What about postmen?

Leonhard: That's an interesting one, which could go either way. I don't see drones as a viable substitute for this profession due to noise, privacy, and security. I think we'll still need people to deliver goods and shipments.

Grass: And as for the miners?

Leonhard: The mining industry is almost completely automated. In Poland, this issue will soon be quite a challenge as I think that, within 10 years, coal will no longer be able to compete with solar energy.

Grass: Do you think solar energy will replace coal?

Leonhard: Absolutely. Firstly, because we need to do this for environmental reasons. Secondly, because the production costs of solar panels have fallen by more than 90% in some cases.

Grass: That's a lovely and idealistic vision, but I don't think that would work in Poland any time soon, mainly as our government is supporting the development of mining by building more coal-fuelled power plants. Let's talk about social media. I'd like to ask you about the role of social media in such a rapidly changing world. Especially about Facebook.

Leonhard: I quit Facebook three weeks ago — you won't find me on there anymore.

Grass: Why's that?

Leonhard: Basically, Facebook has become a perversion of what social media was meant to be. It has used its position, in essence, to conduct experiments with artificial intelligence.

Grass: So you disagree with them having done so? And your decision to delete your Facebook account is an act of protest?

Leonhard: Well, I think it is a question of responsibility. The principle of Facebook is to provide you with a platform but it takes everything you put into it and makes money from it in a thousand really strange ways. The way I see it, you can watch on as the abuse takes place or you can delete your account.

Grass: Maybe now's a good time to ask about the ethical side of creating a business. What, in your opinion, will happen if entrepreneurs, startup founders, and CEOs of large corporations overlook this?

Leonhard: Everyone should take responsibility for their business. Whatever you create as a tool, it can change culture and society and you are responsible for all consequences. Just like factory owners are responsible for environmental pollution.


So if you create an AI tool, you're responsible for safety implications and any other consequences of whatever you create. Theoretically, the more you are connected to new technologies, the more vulnerable you are to loss of privacy, cybercrime and surveillance. Imagine everything you have is connected to a network — a fridge, an account, a house, a car etc. Anyone who owns this data could easily reproduce it.

Grass: So how can this be controlled?

Leonhard: Control is not the solution; we need conscious responsibility, accountability, and honesty in conducting business. And Facebook is not responsible; it's behaving irresponsibly.

Grass: What else is the big challenge for humanity with regards to new technologies?

Leonhard: We're developing technology but it's not benefitting everyone. This is what we call inequality. This is what is happening. If we create new technologies and become more powerful, automating factories while also wanting to keep unemployment low, we have to pay taxes accordingly, create new jobs and keep people in mind at all times. If we privatise entire companies, we'll be left with more money for companies and dividends for shareholders but masses of unemployed people on the streets.



This Danish startup uses AI and speech recognition to predict cardiac arrests over the phone


  • Corti.ai has developed technology that could help save thousands of people suffering cardiac arrests.
  • The company combines advanced voice and pattern recognition technology to predict these cardiac arrest cases in real time.
  • The technology will enable first-aid responders to arrive faster and provide better-informed treatment.
  • Corti is being rolled out across Europe and is also expanding into the US.


Danish startup Corti will launch its artificial intelligence (AI) for detecting cardiac arrest across Europe this summer. The European Emergency Number Association (EENA) announced the partnership with Corti at a recent conference in Slovenia.

AI can outperform human judgment in life-or-death situations

The AI from Corti acts as a digital assistant for dispatchers taking emergency calls. The software uses automatic speech recognition to analyze the call in situations where every minute counts.

Based on data from millions of previous calls, the AI looks for signs of cardiac arrest, including both verbal and non-verbal data — like tone of voice and breathing patterns. During the call, the software provides the dispatcher with suggestions for questions and recommendations for action.
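
To make that concrete, here is a purely illustrative sketch of how such a real-time triage signal could be structured. The feature names, weights and threshold below are invented for this example; Corti has not published its model.

```python
# Hypothetical sketch of real-time risk scoring on an emergency call.
# Feature names and weights are invented; this is not Corti's implementation.
import math

def cardiac_arrest_risk(features):
    """Combine verbal and non-verbal call features into a probability-like score."""
    weights = {
        "agonal_breathing": 2.5,        # irregular gasping detected in the audio
        "caller_says_unconscious": 1.8,
        "speech_rate_deviation": 0.7,   # caller speaking unusually fast or slowly
        "mentions_chest_pain": 0.9,
    }
    bias = -3.0
    z = bias + sum(w * features.get(name, 0.0) for name, w in weights.items())
    return 1.0 / (1.0 + math.exp(-z))   # logistic squash to [0, 1]

# A dispatcher interface could surface a prompt once the score crosses a threshold.
call_features = {"agonal_breathing": 1.0, "caller_says_unconscious": 1.0}
if cardiac_arrest_risk(call_features) > 0.5:
    print("Suggest to dispatcher: confirm breathing, start CPR instructions")
```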

Corti is already deployed in Copenhagen, where the AI has made a definite difference. When researchers at the emergency department tested the technology on 161,000 emergency calls from the Danish capital in 2014, Corti was 93% accurate in identifying cardiac arrest. Actual human dispatchers only got 73% right.

The opportunity to save lives is potentially huge: In Denmark, there are 3,500 out-of-hospital cardiac arrests a year. In the UK there are about 30,000, and in the US more than 356,000.

Healthcare data is 'well-suited' for AI

AI has many applications, but health care is the most obvious for Corti founder and CEO Andreas Cleve.

"It is absurd that placing an ad for a beanbag chair on a social network leverages some of the most advanced AI-tech in history, while health care professionals making life-or-death decisions have to make do with technology from the 1990s. We want to change all that and use AI where it can really make a tangible difference."

Another reason for Corti's focus on the medical field is the availability of data. "The pre-hospital sector is very good at documenting its work, which means we have sufficient historical data to make the correct analyses," Andreas Cleve explains.

Corti has some prominent backers in the AI community, among them Danny Lange, an investor in the Nordic venture capital firm ByFounders.


The 55-year-old Dane has developed machine learning and AI applications for some of the most prominent tech companies in the world, including Amazon, Google, Microsoft, and Uber. "Corti is an excellent example of intelligent use of AI in real life. It actually saves lives," Danny Lange says.

"By improving the human decision process in challenging situations. That is applicable in many more areas, such as ER doctors who have to make life-and-death decisions in minutes."

The Danish AI veteran expects many more practical AI applications in the coming years. "AI has been in use for more than a decade, but it was mainly in the large tech companies with their in-house resources. Now, AI is going mainstream and being democratized. Any company — not just Google, Amazon, IBM and Microsoft — can now get AI resources through a platform, like any other web service."

Corti's software analyzes dialogue in difficult environments

In Corti's case, however, the entire solution has been developed in-house. "We decided to build our own models for speech recognition from the ground up, to suit patient-doctor conversations," Andreas Cleve explains.

"The technology you find in applications like Alexa [Amazon's voice assistant] is different because it’s built for user prompts, monologues and short sentences. We wanted to build a solution that would be a part of a dialogue in a very difficult acoustic environment where even the background noise might be important."

Besides the investment from ByFounders, Corti has previously received funding from venture fund Sunstone. The company employs 20 people and has offices in Copenhagen as well as Seattle.

 



Amazon is updating Alexa to have more natural interactions (AMZN)


Amazon is preparing to release three new updates to Alexa to reduce the friction associated with the platform, Ruhi Sarikaya, who heads the Alexa Brain group, announced last week.


This is important because, although Amazon has an early lead in the voice assistant and smart speaker market — Amazon commanded a 72% share of the total US installed base of smart speakers in 2017 — recurrent updates to the platform will help the company sustain dominance as new competitors enter the space.

Here’s how Amazon is enabling more natural interaction with Alexa to fuel consumer engagement with the platform:

  • Making it easier to discover and engage with skills using natural phrases and requests. Amazon is introducing a new Alexa capability called Skills Arbitration to enable Alexa users to automatically find, enable, and launch skills using natural phrases and requests via machine learning. As discovery of Alexa skills becomes increasingly difficult with the addition of more skills — there are currently 40,000 Alexa skills — Skills Arbitration will become progressively more important for consumers and developers.
  • Improving contextual awareness in follow-up questions for more natural conversations. Alexa’s new Context Carryover feature enables the voice assistant to understand and follow conversational questions and follow-up questions, without having to repeat the “Alexa” wake-up word. For instance, Alexa users can say “Alexa, how is the weather in New York?” and follow up with “What about this weekend?” or “How long does it take to get there?” (A sketch of this carryover idea appears after this list.)
  • Allowing users to offload important information to Alexa. The new memory feature will enable Alexa to store arbitrary information, like important dates the user previously discussed, and recall that information later on. For example, users can ask Alexa to remember a friend’s birthday and order flowers for delivery that day. Google Assistant boasts a similar memory feature.
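
To make the Context Carryover idea concrete, here is a minimal hypothetical sketch of how a slot can be carried between turns. It is invented for illustration and does not use the real Alexa APIs.

```python
# Toy illustration of "context carryover" between voice-assistant turns.
# Slot names and parsing logic are invented; this is not Amazon's implementation.

def resolve(utterance, context):
    """Interpret a turn, filling a missing location slot from the previous turn."""
    query = {"intent": "GetWeather", "text": utterance}
    if " in " in utterance:                          # explicit location in this turn
        query["location"] = utterance.split(" in ", 1)[1].rstrip("?")
        context["location"] = query["location"]      # remember it for follow-ups
    else:                                            # carry the previous location over
        query["location"] = context.get("location")
    return query

ctx = {}
print(resolve("how is the weather in New York?", ctx))  # location parsed: New York
print(resolve("what about this weekend?", ctx))         # location carried over: New York
```

A production assistant would resolve references statistically rather than by string matching, but carrying the location slot across turns is the essential behavior described above.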

The latest announcements could aid Amazon in bolstering usage of its voice assistant platform. At the moment, voice assistants are used primarily for activities like checking the weather, receiving news updates, or making calls. The development of new experiences that expand voice assistants' ability to complete more complex interactions will likely help voice assistants and smart speakers occupy a growing amount of consumers' time.

The voice app ecosystem is booming. In the US, the number of Alexa skills alone surpassed 25,000 in January 2018, up from just 7,000 the previous January, in categories ranging from music streaming services, to games, to connected home tools.

As voice platforms continue to gain footing in homes via smart speakers — connected devices powered primarily by artificial intelligence (AI)-enabled voice assistants — the opportunity for voice apps is becoming more profound. However, as observed with the rise of mobile apps in the late 2000s, any new digital ecosystem will face significant growing pains, and voice apps are no exception. Thanks to the visual-free format of voice apps, discoverability, monetization, and retention are proving particularly problematic in this nascent space. This is creating a problem in the voice assistant market that could hinder greater uptake if not addressed.

Business Insider Intelligence, Business Insider's premium research service, has written a detailed report on voice apps that explores the two major viable voice app stores. It identifies the three big issues voice apps are facing — discoverability, monetization, and retention — and presents possible short-term solutions ahead of industry-wide fixes.

Here are some of the key takeaways from the report:

  • The market for smart speakers and voice platforms is expanding rapidly. The installed base of smart speakers and the volume of voice apps that can be accessed on them each saw significant gains in 2017. But the new format and the emerging voice ecosystems that are making their way into smart speaker-equipped homes is so far failing to align with consumer needs. 
  • Voice app development is a virtuous cycle with several broken components. The addressable consumer market is expanding, which is prompting more brands and developers to develop voice apps, but the ability to monetize and iterate those voice apps is limited, which could inhibit voice app growth.
  • Monetization is only one broken component of the voice app ecosystem. Discoverability and user retention are equally problematic for voice app development. 
  • While the two major voice app ecosystems — Amazon's and Google's — have some Band-Aid solutions and workarounds, their options for improving monetization, discoverability, and retention for voice apps are currently limited.
  • There are some strategies that developers and brands can employ in the near term ahead of more robust tools and solutions.

In full, the report:

  • Sizes the current voice app ecosystem. 
  • Outlines the most pressing problems in voice app development and evolution in the space by examining the three most damning shortcomings: monetization, discoverability, and retention.
  • Discusses the solutions being offered up by today's biggest voice platforms. 
  • Presents workaround solutions and alternative approaches that could catalyze development and evolution ahead of wider industry-wide fixes from the platforms.



Meet Grimes, the Canadian pop star who streams video games and is dating Elon Musk (TSLA)


At Monday's Met Gala, several freshly minted celebrity couples made their debut. But perhaps the most surprising new pairing of the evening: the billionaire tech exec Elon Musk and the Canadian musician and producer Grimes.

While Musk has been known to date successful and high-profile women, the two made a seemingly unlikely pairing. Shortly before they walked the red carpet together, Page Six reported their relationship and explained how they met — over Twitter, thanks to a shared sense of humor and a fascination with artificial intelligence.

For those who may be wondering who Grimes is, here's what you need to know about the Canadian pop star.


Grimes, whose real name is Claire Boucher, grew up in Vancouver, British Columbia. She attended a school that specialized in creative arts but didn't focus on music until she started attending McGill University in Montreal.

Source: The Guardian, Fader



A friend persuaded Boucher to sing backing vocals for his band, and she found it incredibly easy to hit all the right notes. She had another friend show her how to use GarageBand and started recording music.

Source: The Guardian



In 2010, Boucher released a cassette-only album called "Geidi Primes." She released her second album, "Halfaxa," later that year and subsequently went on tour with the Swedish singer Lykke Li. Eventually, Boucher dropped out of McGill to focus on music.

Source: The Guardian, Fader




Here's everything Google unveiled at its biggest conference of the year (GOOG, GOOGL)


Google I/O, the search giant's annual developer conference, kicked off on Tuesday. Google I/O is typically where executives and managers reveal the company's plans as well as some new products. This year was no different.

The main event was held in Mountain View, a stone's throw from Google's headquarters. Google made a lot of announcements, especially those pertaining to advancing the company's artificial-intelligence tools.

Here's a look at everything Google announced at I/O:


9:55 am: We're just waiting for the show to start. Everyone is taking their seats and listening to some nice chillwave music in the meantime. Go ahead and look up "chillwave" while we wait.



9:57 am: There's a festive feeling as usual. Lots of flags from countries all over the world.



10:00 am: The show has begun! Google's playing a cute little video.




Inside Ocado's new warehouse where thousands of robots zoom around a grid system to pack groceries

  • The robots can pack 65,000 grocery orders every week.
  • They move along a grid using an air traffic control system to ensure they don't crash into each other. 
  • The robots have a top speed of 4 metres per second and are battery operated.

 

Ocado has released footage from inside its newest automated warehouse in Andover, UK.

The facility is fitted with automated robots that take crates of products to pick stations where picking robots or humans then assemble the orders for shipping.

The robots also replace empty crates, allowing the facility to process up to 65,000 orders every week.

They are battery powered and can dock at a charging bay to top up.

Watch the video to see how it all works.

Produced by Charlie Floyd



15 mind-blowing announcements Google made at its biggest conference of the year (GOOG, GOOGL)


Google I/O, the search giant's annual developer conference, kicked off on Tuesday.

It began with a two-hour presentation from Google in which top executives took the stage at the Shoreline Amphitheatre in Mountain View, California, to showcase the latest developments in Android, Google Assistant, Google Maps, Google Photos, artificial intelligence, and much more.

While there were a ton of announcements, these were the 15 biggest highlights from the Google I/O keynote.

1. Gmail can now autocomplete entire emails (!) with a new feature called Smart Compose — just keep hitting the tab button, and Google will autocomplete your message. You can switch it on right now as part of the new Gmail experience Google is rolling out.

Read more about Smart Compose »



2. A new Google Photos feature called Suggested Actions can spot friends in your photos and offer to share those photos with those people with the press of a button.



3. Google Photos is more powerful now. You can instantly turn photos of documents into PDFs. You can also remove color from your photos — even just in certain areas — or re-colorize your old black-and-white photos of your relatives.




Alphabet's biggest bear on Wall Street breaks down the 2 risks he's worried about (GOOGL)


  • Alphabet just held its developer conference, where it explained the new technologies it's investing in.
  • One cutting-edge technology is artificial intelligence.
  • Wall Street's biggest Alphabet bear, Pivotal Research's Brian Wieser, isn't too hot on Alphabet's artificial intelligence business.
  • And he also thinks AI doesn't help advertising revenue grow, which he says is bound to decelerate.

Alphabet just held its annual I/O conference, where it showed investors and analysts the new technologies it's investing in. The company gave investors the rundown on how it's improving artificial intelligence and machine learning across its user platforms, like Gmail and Google Search.

But Pivotal Research analyst Brian Wieser, Wall Street's biggest Alphabet bear, told Business Insider he's skeptical of two key themes in its business: the company's artificial intelligence businesses and the digital-advertising market.     

Alphabet's main artificial intelligence product, Google Home, hit the market in November 2016, but the company doesn't disclose much financial information about its artificial intelligence business, so investors aren't armed with the information needed to make bets on its impact.

But Alphabet's AI business isn't just about selling new products. It's also used to enhance machine-learning algorithms on Google platforms. For example, Gmail can now autocomplete entire emails through "Smart Compose."

But Wieser says those developments don't have much impact, and they aren't doing much to drive advertising dollars. 

"These aren't material businesses yet," Wieser said. "It's so small relative to the rest of the business that it's barely worth attempting to model separately from the rest of the business."

His point about AI not necessarily driving ad dollars is related to a broader risk he warned of: a slowing digital-advertising market. 

"The total ad market is relatively fixed, and the share digital can take of that market is relatively fixed, and so that yields a lower revenue growth from advertising than most people think," he said.

While Wieser sees a looming slowdown in global digital-advertising growth, recent data from Magna Global shows that market was $209 billion in 2017, up 19% from a year earlier. And 2017 was the first year in which digital ad spending beat television ad spending, taking 41% of the total advertising market. Television ad spending captured 35%. 

Wieser's price target on Alphabet is $970, about 11% below where it's currently trading. 

Alphabet is up 1.71% this year. 




Google has wild new technology that sounds like a real human on the phone, and people already have really strong opinions about it (GOOG, GOOGL)

  • Google gave the first public demonstration of Google Duplex, a new experimental feature that lets Google Assistant call businesses on your behalf to get information or make reservations.
  • It was easily the wildest announcement at Google I/O, the company's annual developer conference.
  • People are already forming some very strong opinions about Google Duplex and its various technological and societal implications.

 

Google Duplex was the talk of Google I/O, the company's annual developer conference that kicked off this week.

Google CEO Sundar Pichai unveiled the new product himself on Tuesday: Basically, you can ask Google Assistant to call a business on your behalf, and Google's AI will schedule an appointment for you. Google demoed two phone calls on stage to give people a taste of what to expect.

For the most part, people focused on two aspects of Google Duplex:

  1. How natural it sounded. Google Duplex uses Google's new natural-sounding AI, which adds its own little fillers between words the same way humans do, like "um" and "uh." (A toy sketch of this idea appears after this list.)
  2. The fact that the people on the phone didn't seem to know they were talking to computer software. Pichai did mention during the presentation that Google is working hard to "get the user experience and the expectations right for both businesses and users," but people think the final product should include some kind of greeting so callers are aware they're talking to Google Assistant and not an actual human being.
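
As a toy illustration of the first point, the snippet below sprinkles fillers into a scripted reply at clause boundaries. It is invented for this article; Google has not published how Duplex chooses its disfluencies.

```python
# Toy illustration of adding conversational fillers to a synthesized reply.
# Invented for illustration; not how Google Duplex actually works.
import random

FILLERS = ["um,", "uh,"]

def add_disfluencies(sentence, rate=0.5, seed=7):
    """Insert a filler after some clause boundaries (marked here by commas)."""
    random.seed(seed)
    words, out = sentence.split(), []
    for i, word in enumerate(words):
        if i and words[i - 1].endswith(",") and random.random() < rate:
            out.append(random.choice(FILLERS))
        out.append(word)
    return " ".join(out)

print(add_disfluencies("Sure, I'd like to book a table for four, around 7 pm on Thursday."))
```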

And so, reactions to Google Duplex have mainly played out in two ways: People are blown away by what Google has created, and how far its AI efforts have come. But people are scared, too, and they're concerned about what this could mean for artificial intelligence and the future of human interaction.

Some people believe Google Duplex presents a deeper moral issue. 

For what it's worth, Google insiders told Business Insider that the company will likely tweak the final version of Google Duplex so people feel comfortable using it — and that would likely include communicating that it is, in fact, a machine talking on a person's behalf, but Google could also remove some of the "ahs" and "ums" that make some people feel uncomfortable.

Still, many are excited by the possibilities presented by Google Duplex:

Google Duplex is likely years away from a commercial rollout, and it will almost certainly look different by the time it's released to the public.

Still, regardless of whether you think Google Duplex is good or bad, it's incredible to see how far Google's AI has come in the last decade.

Years ago, the main thrust of these personal assistants was simple stuff: getting the weather, or setting a timer without needing to open an app. Soon, though, Google Assistant could be calling businesses on your behalf and setting up appointments. Who knows what the next 10 years will bring: Maybe Google Assistant will be able to do actual work for you, or apply to jobs for you in the background, like the way AI is portrayed in the 2013 movie "Her."

One thing is certain: As technologies get more advanced, our relationships with them will only get more complicated.



Nvidia's bitcoin boom is over, but this investor says the bigger opportunity is just starting (NVDA)


  • Nvidia gave its first-ever peek at its crypto mining-related business on Thursday.
  • But Benjamin Lau, the chief investment officer of Apriem Advisors, says Nvidia has another opportunity that's much bigger than crypto mining.
  • Booming demand for Nvidia's chips for artificial intelligence and data centers should drive the company's business for the near future, he said.

 

Graphics chipmaker Nvidia pulled the curtain back on its bitcoin mining-related business for the first time on Thursday. But the real game-changing opportunity for Nvidia is not inside the crowded cryptocurrency mines — it's in the wide open field of artificial intelligence.

So says Benjamin Lau, the chief investment officer of Apriem Advisors, a firm with $650 million under management that's betting big on Nvidia. 

Nvidia's businesses designing chips for AI uses (which could upend industries from healthcare to transportation) and chips for computer data centers are the future, Lau said.

"They have a great platform and not a lot of really meaningful competition in that space," said Lau, whose firm is long Nvidia's stock.

Nvidia is already seeing significant success in those areas. In the first quarter, its data center business brought in $701 million in revenue, which was up 71% from the year-ago period. Sales of its latest Tesla chips — which the company designed for AI processing — helped boost the data center business, the company said.

Data center sales were reportedly below analysts' estimates, which may have explained the sell-off in Nvidia's stock in after-hours trading following its report. In recent trading, Nvidia's shares were down $8.53, or 3%, to $251.60.

Still, Lau was impressed with the business' performance in the period and sees it continuing to drive Nvidia's overall results. "The AI stuff and the data center stuff is going to be the short-term boost" for Nvidia, Lau said.

The investor is also optimistic about the company's automotive efforts, although those have been mired in controversy of late. Nvidia has been developing chips and other technology for use in self-driving cars.

After an Uber autonomous car killed a pedestrian in Arizona, Nvidia announced that it would suspend its own autonomous vehicle tests. Uber's self-driving cars use Nvidia graphics processors, but not its autonomous vehicle system.

The bitcoin business is in the past, the autonomous car business is the future

Nvidia's automotive business totaled $145 million in sales in the first quarter, up just 4% from the year-ago period.

But it's still early days in the development of self-driving cars, Lau said.

"The auto stuff is going to be the future for them," he said. But he acknowledged that "there will be speed bumps with autonomous driving."

One area Lau isn't counting much on is cryptocurrency mining. For the first time ever, Nvidia disclosed in its earnings report its cryptocurrency-related sales, saying they hit $289 million in its first quarter.

Nvidia and rival AMD have been boosted in recent years by the growing popularity of cryptocurrencies. So-called miners have bought up their graphics processors to help solve the complex mathematical problems involved in creating new cybercoins.

But that business appears to be waning as prices of bitcoin and other cryptocurrencies have slumped, the costs involved in mining them have risen, and miners increasingly rely on application-specific integrated circuits, or ASICs, which are chips designed for particular purposes. Nvidia itself predicted crypto-related sales would slump in its second quarter, something that jibes with Lau's expectations.

"It's not going to be a driver going forward," he said. "It was a nice boost while they had it."

Nvidia's stock has soared over the last two years amid burgeoning sales and earnings. But the company's sales are still relatively small compared with other big chipmakers, such as Qualcomm and Broadcom, Lau noted.

"They have a lot of room to grow in this area," he said.

SEE ALSO: Intel may be flying high, but it faces plenty of challenges ahead



Google's AI demo at this year’s I/O has sparked a huge row about ethics


Sundar Pichai Google I/O CEO

  • At the Google I/O developer conference 2018, Google CEO Sundar Pichai revealed his company's newest AI initiative, Google Duplex.
  • It allows Google Assistant to make calls for you, reserve tables, book appointments and much more.
  • Experts have warned of the possible misuse of this feature by marketers, politicians, and businesses.


Technology and ethics have always been at odds with each other. Ask around: many feel that technology companies have no ethics.

Social media fuels that debate from time to time, spreading fake news, extremism and more. But an AI presentation by Google at its developer conference this year may have sparked the ultimate debate.

On stage at Google I/O 2018, Sundar Pichai proudly showed his company's newest AI initiative. It's called Google Duplex and it allows the Google Assistant to make calls for you, book tables, appointments and much more.

Pichai stated that the technology is still in development, but there are already signs that Google is getting creepily good at artificial intelligence.

The ethical conundrum

In the demo, the Assistant booking a haircut didn't tell the salon employee that they were speaking to a robot. Google has since told CNET that the Assistant will "likely tell the person on the other end of the line that he or she is talking to a digital personal assistant."

That's an ethical dilemma right there. Experts have warned that the feature could be misused by marketers to make unsolicited robocalls, by political parties to make pitches, and so on. Yet that may not be the biggest problem.

As countries scramble to regulate technology, Google's demo may give them food for thought. While the calls are obviously initiated by you, one could question who holds the responsibility for information shared over them. If your son sets an appointment using your phone, an appointment you aren't aware of until the calendar notification rings, are you liable to honour it?

Or consider the data itself. When you call local businesses yourself, that information isn't easily obtained by anyone else. When the Google Assistant makes the same call, it does so from a Google server. Who owns this data? Can Google access it at will? The company already has access to almost everything you do in a day, but should it also know each and every appointment you're setting?

The Assistant won't just make the call here; it will also block time on your calendar. Done manually, a person could simply write "hair appointment" instead of recording the actual name and place of the salon; provided Google can't access your location at all times, that keeps at least some of your information private.

And last but not least, what happens when businesses start using this method? Robotic sales calls aside, one wonders what would happen if the robot made an error when calling a customer. Would a business be liable for the robot's error?

SEE ALSO: Here's everything Google unveiled at its biggest conference of the year




Google's eerily realistic new AI will identify itself when talking to people, says Google


sundar pichai google event



Google introduced its newest jaw-dropping feat at the I/O conference this week: a hyper-realistic-sounding chatbot that will be able to make phone calls for you.

The demonstration received both praise and skepticism, with some calling into question the ethics of an AI that cannot easily be distinguished from a real person's voice.

Today, a Google spokesperson confirmed in a statement to Business Insider that the creators of Duplex will "make sure the system is appropriately identified" and that they are "designing this feature with disclosure built-in."

In a demonstration of the software, called Google Duplex, the voice used human-like fillers such as "um" and sounded so realistic that the people on the other end of the line seemed completely unaware they had been chatting with an AI. That prompted many tech influencers to debate, on Twitter and elsewhere, whether Duplex and other chatbots should be required to identify themselves to humans.

Here's the full statement from Google:

"We understand and value the discussion around Google Duplex — as we’ve said from the beginning, transparency in the technology is important. We are designing this feature with disclosure built-in, and we’ll make sure the system is appropriately identified. What we showed at I/O was an early technology demo, and we look forward to incorporating feedback as we develop this into a product."

Google CEO Sundar Pichai preemptively addressed ethics concerns in a blog post that corresponded with the announcement earlier this week, saying:

"It’s clear that technology can be a positive force and improve the quality of life for billions of people around the world. But it’s equally clear that we can’t just be wide-eyed about what we create. There are very real and important questions being raised about the impact of technology and the role it will play in our lives. We know the path ahead needs to be navigated carefully and deliberately—and we feel a deep sense of responsibility to get this right."

In addition, several Google insiders have told Business Insider that the software is still in the works, and the final version may not be as realistic (or as impressive) as the demonstration.


SEE ALSO: Google insiders say the final version of Duplex, the stunning AI bot that sounded so real it fooled humans, may be purposefully made less scary



A former Googler leading the charge against AI weapons says her time at Google taught her that even 'nice' people can make bad moral decisions (GOOGL, GOOG)


Larry Page

  • Lilly Irani, an associate professor and former Google employee, has co-authored a letter signed by more than 200 academics and researchers demanding that Google pull out of a controversial military program and join in calling for a ban on autonomous weapons.
  • Google's motto may be "Don't be evil," but Irani says Google is staffed by humans and humans don't always make the right call.  She says she saw that herself during her time at Google. 
  • Irani and the other signers of the letter want a larger debate on autonomous weapons. 


Lilly Irani wasn’t altogether surprised to see Google, her former employer, caught up in a controversy over management’s decision to participate in Project Maven, a military program that critics say could help improve the accuracy of drone missile strikes.

An early Google employee for four years before leaving for grad school in 2007, Irani says she knows that good people work at Google: people interested in making the world a better place.

But she also knows that good people don’t always make the right call. Even all those years ago, during Irani's stint as a software-product designer at Google, she saw how financial pressures could bear down on managers just like they do at other profit-making companies.

And like elsewhere, she said these forces sometimes lead people to make questionable moral or ethical decisions.

She recalled that while working on Google’s search history, one of her project managers told her: “Privacy is kind of like boiling a frog. If you go too far or too fast, people will freak out. But if you do it little by little, people will slowly get used to what you’re doing.”

Irani said at the time she didn’t feel she could challenge the assertion. She says now that she believes she was one of many tech workers back then who wanted to discuss some of the moral responsibilities facing big tech companies but didn’t know how.  That’s one of the reasons why she’s now speaking out about Project Maven.

She is one of the authors of a letter published Monday and signed by at least 260 researchers, academics and scholars that demands Google pull out of Project Maven and commit to never developing military technologies. The signers, including Noam Chomsky, the MIT scientist and political activist, also want Google to join them in calling for a ban on autonomous weapons. A Google spokeswoman did not respond to an interview request. 

The fact that they are so nice and well meaning is an important sign of danger

drone

The past several months have certainly been a difficult period for Google’s efforts in artificial intelligence. Just last week during Google’s I/O developer conference, the company demonstrated an as-yet unreleased version of Google Assistant, called Duplex, that can carry on conversations. During the demo, Duplex successfully booked a hair appointment without the woman on the other end of the line ever knowing she was speaking to an automated program.

Some in the media called the technology “creepy.”

And in April, The New York Times reported that a petition had circulated within Google demanding management end involvement with Project Maven and commit to never developing weapons. More than 3,000 Google workers signed, according to the Times. On Monday, Gizmodo reported that a dozen Google employees have decided to resign in protest over the issue. 

Project Maven is an effort to help the Pentagon use artificial intelligence to interpret surveillance video. That sounds harmless enough, but critics say the technology could help improve the accuracy of missile strikes.

“With Project Maven, Google becomes implicated in the questionable practice of targeted killings,” read a copy of the letter co-authored by Irani, now an assistant professor at the University of California, San Diego. “These include so-called signature strikes and pattern-of-life strikes that target people based not on known activities but on probabilities drawn from long range surveillance footage.”

If Google’s leaders didn’t know it before, they are now fully aware that AI spooks many people — and the prospect of combining AI with weapons is especially controversial. The researchers and academics said in their letter that if Google’s managers assist the Pentagon with AI, the company is helping to move the world closer to “authorizing autonomous drones to kill automatically, without human supervision or meaningful human control.”

Killer robots sound scary, but is the statement true? Google has said in the past that the company’s contribution to Maven is not offensive in nature and won’t be used to kill people.

And what if Google doesn’t participate? One of the company’s rivals will likely do the work and pocket the fees instead. In the end, signing petitions and tendering resignations from Google probably won’t stop the military from obtaining AI technology.

Irani said that Google should always at least listen to its workers, but regardless, she would like to see a much larger debate take place across society about autonomous weapons and AI. She doesn’t think Google or the military should have the final word on AI weapons.

According to Irani, the fact that a company like Google, with its “Don’t be evil” motto, can find itself linked to a program associated with Hellfire missiles, Predator drones and "surgical strikes" is an indication that something is wrong with the current state of affairs.

“Google is full of super nice, very intelligent people, many of whom generally want the best for the world,” Irani said. “But even at Google we get a situation where our data might be integrated into the fabric of unaccountable killing. The fact that they are so nice and well meaning and this activity is ongoing is an important sign of danger.”

SEE ALSO: Thousands of Google employees asked CEO Sundar Pichai to stop providing AI tech for the US military's drones



Gmail can now autocomplete entire emails with a new feature called Smart Compose — here's how to turn it on


google io smart compose

One of the best announcements that came out of this year's Google I/O was a new Gmail feature called Smart Compose, which can autocomplete entire emails for you.

Unlike many of the other announcements from Google I/O, Gmail Smart Compose can actually be switched on and used right now. It's all part of the new Gmail experience that Google has been rolling out to customers.

Here's how to turn on Gmail Smart Compose and what it's like to use:

SEE ALSO: 15 mind-blowing announcements Google made at its biggest conference of the year

DON'T MISS: Here's everything Google unveiled at its biggest conference of the year

The very first thing you'll need to do is activate the new Gmail experience, if you haven't already. (Don't worry, you can always go back to the "Classic" look at any time.)



You'll get a cute little welcome greeting that lets you know you've activated the new Gmail design.



Now that you're in the new Gmail, visit your Settings. Click the gear icon in the top-right corner of the screen.



See the rest of the story at Business Insider

How artificial intelligence is cutting costs, building loyalty, and enhancing security across financial services


maturity of AI solutions

This is a preview of a research report from Business Insider Intelligence, Business Insider's premium research service. To learn more about Business Insider Intelligence, click here

Artificial intelligence (AI) is one of the most commonly referenced terms by financial institutions (FIs) and payments firms when describing their vision for the future of financial services. 

AI can be applied in almost every area of financial services, but the combination of its potential and complexity has made AI a buzzword, and led to its inclusion in many descriptions of new software, solutions, and systems.

This report from Business Insider Intelligence, Business Insider's premium research service, cuts through the hype to offer an overview of different types of AI, and where they have potential applications within banking and payments. It also emphasizes which applications are most mature, provides recommendations of how FIs should approach using the technology, and offers examples of where FIs and payments firms are already leveraging AI. The report draws on executive interviews Business Insider Intelligence conducted with leading financial services providers, such as Bank of America, Capital One, and Mastercard, as well as top AI vendors like Feedzai, Expert System, and Kasisto.

Here are some of the key takeaways:

  • AI, or technologies that simulate human intelligence, is a trending topic in banking and payments circles. It comes in many different forms, and is lauded by many CEOs, CTOs, and strategy teams as their saving grace in a rapidly changing financial ecosystem.
  • Banks are using AI on the front end to secure customer identities, mimic bank employees, deepen digital interactions, and engage customers across channels.
  • Banks are also using AI on the back end to aid employees, automate processes, and preempt problems.
  • In payments, AI is being used in fraud prevention and detection, anti-money laundering (AML), and to grow conversational payments volume.

 In full, the report:

  • Offers an overview of different types of AI and their applications in payments and banking. 
  • Highlights which of these applications are most mature.
  • Offers examples where FIs and payments firms are already using the technology. 
  • Provides descriptions of vendors of different AI-based solutions that FIs may want to consider using.
  • Gives recommendations of how FIs and payments firms should approach using the technology.

Subscribe to an All-Access membership to Business Insider Intelligence and gain immediate access to:

  • This report and more than 250 other expertly researched reports
  • Access to all future reports and daily newsletters
  • Forecasts of new and emerging technologies in your industry
  • And more!

Purchase & download the full report from our research store


Amazon is selling AI software to cops that can scan hundreds of thousands of faces for $6 per month


FILE PHOTO:  A police body camera is seen on an officer during a news conference on the pilot program of body cameras involving 60 NYPD officers dubbed 'Big Brother' at the NYPD police academy in the Queens borough of New York, U.S. on December 3, 2014.  REUTERS/Shannon Stapleton/File Photo

  • The American Civil Liberties Union and other privacy activists are asking Amazon to stop marketing a powerful facial recognition tool to police.
  • The AI tool, called Rekognition, is already being used by at least one agency — the Washington County Sheriff’s Office in Oregon — to check photographs of unidentified suspects against a database of mug shots from the county jail.
  • Privacy advocates are concerned about the expanded use of facial recognition through body cameras worn by officers, which allows police to identify and track people in real time.


SEATTLE — The American Civil Liberties Union and other privacy activists are asking Amazon to stop marketing a powerful facial recognition tool to police, saying law enforcement agencies could use the technology to “easily build a system to automate the identification and tracking of anyone.”

The tool, called Rekognition, is already being used by at least one agency — the Washington County Sheriff’s Office in Oregon — to check photographs of unidentified suspects against a database of mug shots from the county jail, which is a common use of such technology around the country.

But privacy advocates have been concerned about expanding the use of facial recognition to body cameras worn by officers or safety and traffic cameras that monitor public areas, allowing police to identify and track people in real time.

The tech giant’s entry into the market could vastly accelerate such developments, the privacy advocates fear, with potentially dire consequences for minorities who are already arrested at disproportionate rates, immigrants who may be in the country illegally or political protesters.

“People should be free to walk down the street without being watched by the government,” the groups wrote in a letter to Amazon on Tuesday. “Facial recognition in American communities threatens this freedom.”

Amazon released Rekognition in late 2016, and the sheriff’s office in Washington County, west of Portland, became one of its first law enforcement agency customers. A year later, deputies were using it about 20 times per day — for example, to identify burglary suspects in store surveillance footage. Last month, the agency adopted policies governing its use, noting that officers in the field can use real-time face recognition to identify suspects who are unwilling or unable to provide their own ID, or if someone’s life is in danger.

“We are not mass-collecting. We are not putting a camera out on a street corner,” said Deputy Jeff Talbot, a spokesman for the sheriff’s office. “We want our local community to be aware of what we’re doing, how we’re using it to solve crimes — what it is and, just as importantly, what it is not.”

It cost the sheriff’s office just $400 to load 305,000 booking photos into the system and $6 per month in fees to continue the service, according to an email obtained by the ACLU under a public records request.
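For a sense of what that workflow involves, here is a minimal sketch using the AWS SDK for Python (boto3): create a Rekognition face collection, index booking photos stored in S3, then search the collection with a probe image. The bucket name, collection ID, photo keys, and match threshold are illustrative assumptions, not details of Washington County's actual deployment.

```python
# Hypothetical sketch: index booking photos into an Amazon Rekognition
# collection, then search that collection with a probe image.
# Bucket, collection ID, keys, and threshold are illustrative assumptions.
import boto3

rekognition = boto3.client("rekognition", region_name="us-west-2")

COLLECTION_ID = "booking-photos"      # assumed collection name
BUCKET = "example-mugshot-bucket"     # assumed S3 bucket

# One-time setup: create the face collection.
rekognition.create_collection(CollectionId=COLLECTION_ID)

# Index each booking photo; ExternalImageId ties the face back to a booking record.
for key, booking_id in [("photos/0001.jpg", "0001"), ("photos/0002.jpg", "0002")]:
    rekognition.index_faces(
        CollectionId=COLLECTION_ID,
        Image={"S3Object": {"Bucket": BUCKET, "Name": key}},
        ExternalImageId=booking_id,
        MaxFaces=1,
        QualityFilter="AUTO",
    )

# Search the collection with an unidentified suspect's photo.
response = rekognition.search_faces_by_image(
    CollectionId=COLLECTION_ID,
    Image={"S3Object": {"Bucket": BUCKET, "Name": "probe/surveillance-still.jpg"}},
    FaceMatchThreshold=80,  # assumed confidence cutoff
    MaxFaces=5,
)

# Print candidate booking IDs and their similarity scores.
for match in response["FaceMatches"]:
    print(match["Face"]["ExternalImageId"], match["Similarity"])
```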

Amazon Web Services did not answer emailed questions about how many law enforcement agencies are using Rekognition, but in a written statement the company said it requires all of its customers to comply with the law and to be responsible in the use of its products.

The statement said some agencies have used the program to find abducted people, and amusement parks have used it to find lost children. British broadcaster Sky News used Rekognition to help viewers identify celebrities at the royal wedding of Prince Harry and Meghan Markle last weekend.

Last year, the Orlando, Florida, Police Department announced it would begin a pilot program relying on Amazon’s technology to “use existing City resources to provide real-time detection and notification of persons-of-interest, further increasing public safety.”

Orlando has a network of public safety cameras, and in a presentation posted to YouTube this month, Ranju Das, who leads Amazon Rekognition, said Amazon would receive feeds from the cameras, search them against photos of people being sought by law enforcement, and notify police of any hits.

“It’s about recognizing people, it’s about tracking people, and then it’s about doing this in real time, so that the law enforcement officers ... can be then alerted in real time to events that are happening,” he said.
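Das's description of ingesting camera feeds and alerting on matches maps roughly onto Rekognition Video's stream processors, which read from a Kinesis video stream and write face-search results to a Kinesis data stream. The sketch below, again in boto3, shows the general shape of such a setup; the processor name, ARNs, IAM role, and threshold are placeholders, not Orlando's actual configuration.

```python
# Hypothetical sketch: connect a camera feed (Kinesis Video Stream) to a
# Rekognition Video stream processor that searches a face collection and
# writes matches to a Kinesis Data Stream. All names and ARNs are placeholders.
import boto3

rekognition = boto3.client("rekognition", region_name="us-east-1")

rekognition.create_stream_processor(
    Name="persons-of-interest-demo",  # assumed processor name
    Input={
        "KinesisVideoStream": {
            "Arn": "arn:aws:kinesisvideo:us-east-1:123456789012:stream/city-cam/111"
        }
    },
    Output={
        "KinesisDataStream": {
            "Arn": "arn:aws:kinesis:us-east-1:123456789012:stream/face-matches"
        }
    },
    Settings={
        "FaceSearch": {
            "CollectionId": "persons-of-interest",  # assumed collection
            "FaceMatchThreshold": 85,
        }
    },
    RoleArn="arn:aws:iam::123456789012:role/RekognitionStreamRole",
)

# Start processing; match records then appear on the output data stream,
# where a downstream consumer could raise real-time notifications.
rekognition.start_stream_processor(Name="persons-of-interest-demo")
```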

The Orlando Police Department declined to make anyone available for an interview about the program, but said in an email to The Associated Press that the department “is not using the technology in an investigative capacity or in any public spaces at this time.”

“The purpose of a pilot program such as this, is to address any concerns that arise as the new technology is tested,” the statement said. “Any use of the system will be in accordance with current and applicable law. We are always looking for new solutions to further our ability to keep the residents and visitors of Orlando safe.”

The letter to Amazon followed public records requests from ACLU chapters in California, Oregon and Florida. More than two dozen organizations signed it, including the Electronic Frontier Foundation and Human Rights Watch.

Clare Garvie, an associate at the Center on Privacy and Technology at Georgetown University Law Center, said part of the problem with real-time face recognition is its potential impact on free-speech rights.

While police might be able to videotape public demonstrations, face recognition is not merely an extension of photography but a biometric measurement — more akin to police walking through a demonstration and demanding identification from everyone there.

Amazon’s technology isn’t that different from what face recognition companies are already selling to law enforcement agencies. But its vast reach and its interest in recruiting more police departments to take part raise concerns, she said.

“This raises very real questions about the ability to remain anonymous in public spaces,” Garvie said.




