Channel: Artificial Intelligence

Here's when robots will start beating humans at every task


Don't expect to see a human behind the wheel of an 18-wheeler by 2027. Or a set of human hands performing a delicate surgery by 2053.

According to a new study from Oxford and Yale University researchers, those are the years artificial intelligence is slated to take over each of those tasks. And so it will go for millions of other jobs over the next 50 years, researchers find.

The study relied on survey responses from 352 AI researchers, who gave their opinions on when machines will replace humans at various tasks.

Lead investigator Katja Grace and her colleagues found the tasks most likely to be automated within the next 10 years were rote, mechanical ones. Machine translation could outpace human performance by 2024, responses indicated, and AI may be able to write better high-school-level essays than humans by 2026.

More complex and creative tasks, like writing books and performing high-level math, will take longer. Ultimately, the researchers found AI could automate all human tasks by the year 2051 and all human jobs by 2136.

"Advances in artificial intelligence ... will transform modern life by reshaping transportation, health, science, finance, and the military," the researchers wrote. "These results will inform discussion amongst researchers and policymakers about anticipating and managing trends in AI."

UK digital minister: AI needs a 'strong framework that carries the legitimate consent of the people'


Matt Hancock

UK digital minister Matt Hancock hinted on Tuesday that the government will look to start regulating companies and scientists working in the field of artificial intelligence (AI).

Speaking at the CogX AI conference in London, Hancock stressed the potential that AI has to change the world we live in but said that the technology must be regulated by a suitable framework.

"We need to make sure that there is a strong framework that carries the legitimate consent of the people but that can allow and encourage innovation and can be flexible and move fast too," Hancock said.

He added: "Some might say we should get out of the way and allow the technology to develop. Some say that the risks around the technology are too great and that we should not encourage development at all. But I would reject both of these extreme arguments.

"In almost every area of life we live in a framework of good, decent, and legal behaviour. In the best cases, in liberal democracies, this carries the legitimate consent of the people too. And a good framework allows innovation — fast and sometimes disruptive innovation — and allows new technology and especially a new underpinning technology that requires a regulatory and governance environment that can move fast too."

Many scientists believe that AI will be as smart as humans within the next few decades, and some, including Oxford philosopher Nick Bostrom, believe machines will quickly outsmart humans once this happens.

Scientists such as Stephen Hawking and renowned business leaders like Elon Musk and Bill Gates have warned that AI could be incredibly dangerous if it's not developed in the right way — for example, if biases creep into the data sets that self-learning algorithms are trained on.

Hawking told the BBC that AI could wipe out humanity, for example, while Musk said the technology could be "more dangerous than nukes". But AI could also be used to help find a cure for cancer, control autonomous cars, and significantly cut energy consumption.


I tried the app that lets you search dating sites for celebrity lookalikes


Dating AI 1

The INSIDER Summary:

  • I tried an app that allows you to search for people on dating apps who look like celebrities.
  • It worked, with mixed results.
  • I found out a lot of people use celebrities as their dating profile pictures.

If you've ever dreamed about swiping through Tinder and finding your favorite celebrity, you may just be in luck (sort of).

A new app called Dating AI allows you to search dating apps for people who look just like the celebrity of your choice. You can then view their profiles and see whether you'd really be as compatible with a lookalike as you hope you'd be with your favorite A-lister.

It's a really strange concept, which is exactly why I had to try it for myself. With my boyfriend's blessing, I downloaded the app and went searching for my real-life Chris Pine.

The app is really simple to use at first. 

In the free version, you have the option of pre-selected photos of a few dozen celebrities. Some I recognized and some I didn't, but the biggest thing I noticed was how old and kind of "off" the photos were.

Most were taken at least a few years ago (think Khloe Kardashian pre-weight loss and Beyoncé during her "Crazy in Love" days) or were pictures of the actors in costume during TV shows. If you want Kit Harington, for instance, you're actually choosing Jon Snow and if you want Ariana Grande, the picture is of her rocking pink hair during her Nickelodeon days. 

I decided to start with Kit Harington — well, Jon Snow — to see if I could find anyone that could be my King in the North. I clicked his photo and it automatically pulled up profiles. I wasn't able to choose an age range or a gender preference in the free version. 

The first thing I saw was just how many people were using Jon Snow as their Tinder pic. 

Kit Harrington Dating AI Skitch

Turns out a lot of people are using Kit's face for their profile pic. As for the people who didn't use Harington or Jon Snow's visage to represent them, we've blurred them out for their own privacy, but I just can't see the similarities. I think this app focuses a lot on facial hair, so mostly, it was just some impressive mustaches.

I wanted to see how a lady would stack up, so next I chose Selena Gomez. While a few people also chose her photo as their profile picture, these matches proved a little more accurate, in my opinion. I wonder how much that has to do with the fact that the people in the photos have a similar makeup style and eyebrows.

selena dating AI

In the full version of the app, you have the option of taking a photo, uploading a previously saved photo, or searching your Facebook friends' photos. I noticed that Chris Pine wasn't on the list of celebs in the free version so I quickly remedied that and uploaded a photo of him. 

An initial search of my beloved Chris Pine was a total dud. 

Chris Pine Dating AI

After doing a cursory glance through their profiles, most of the people it matched his photo with were people who identified as female. Though they all had beautiful blue eyes like Chris, this was where the similarities stopped. 

In the paid app, you can choose the gender and age range of the people you want to search for, so I narrowed it down to people who identified as men (though the app doesn't let you filter by whether they're interested in women).

chris_pine dating AI

These men actually looked a lot like Chris Pine! I was shocked and pleased, and actually debated whether one was really a photo of him. This particular set of results was great, and by clicking through some other celebrities, I found a lot of people who shared features with the stars. It really does vary by person.

I also had to search myself for fun, and was pleasantly surprised with how much I think the people looked like me. They all had great eyebrows, so I'll take that as a win. 

This app is really fun to play with but it's not without some limitations.

To search by photo, specify gender, and actually see the profiles, you have to pay a steep $10-a-month subscription charge, although there is a one-week free trial.

With some apps, like Match.com and Plenty of Fish, you can find and message the person. But with apps like Tinder, which require a mutual match before you can message someone, there isn't much incentive to find them through this app.

If you have a spare $10 a month and a burning desire to find a guy who looks like Chris Hemsworth, this may be the app for you. It'll sure make for an interesting opening line, at the very least.


An Oxford University artificial intelligence startup has raised £17 million to check code for errors


DiffBlue

DiffBlue, an artificial intelligence startup spun out of Oxford University, has raised $22 million (£17.3 million) in Series A funding for technology which checks and corrects code.

The round was led by Goldman Sachs Principal Strategic Investments, alongside Oxford Sciences Innovations and the Oxford Technology and Innovations Fund.

The company was cofounded by Daniel Kroening, a computer science professor at Oxford, and Peter Schrammel, a computer science professor at Sussex. The startup claims to "understand" code, meaning it can carry out coding tasks considered too repetitive and boring for human developers.

One example is testing developers' code for bugs.

Currently, human developers have to write their own test code to find any bugs in their software. It is, according to DiffBlue cofounder Daniel Kroening, something programmers tend to dislike because it's labour-intensive and less satisfying than coding itself. He explained that it's a little like asking a schoolchild to stand up and read their essay in front of the class, then to explain everything that's wrong with it. Or at least, that's how developers feel.

"None of this is objectively true," he added. "Tests have enormous value because users don’t want to use buggy software. Nevertheless, this sort of thing gets a developer really very grumpy."

Business Insider asked how this might be different from a kind of spellcheck for code, something which surely seems easy enough.

"Tests aren't about spellchecking, they're about meaning," Kroening said. "It's like entrusting articles you write to spellcheck. If you say the president of the USA is Donald Duck, it's factually wrong, but spellchecker won’t flag it up.

"Tests aren’t meant to find typos, they’re meant to identify misbehaviour. That’s the same as a sentence not making sense, syntactically."

This is where AI comes in, because it requires intelligence to generalise from patterns.
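
Kroening's distinction can be made concrete. Below is a minimal, hypothetical sketch (invented for illustration, not DiffBlue's output): a function any spellcheck-style linter would wave through, plus the kind of behavioural test that catches what is actually wrong.

```python
# A syntactically valid function with a semantic bug: it should return
# the largest value, but the comparison is inverted.
def largest(values):
    best = values[0]
    for v in values[1:]:
        if v < best:  # bug: should be v > best
            best = v
    return best

# A test encodes expected behaviour, not spelling: a spellcheck-style
# linter passes this file, but the assertion catches the misbehaviour.
def test_largest():
    try:
        assert largest([3, 1, 7, 2]) == 7
        return "pass"
    except AssertionError:
        return "fail: largest() misbehaves"

print(test_largest())  # the inverted comparison makes this report a failure
```

DiffBlue's pitch is generating tests like `test_largest` automatically, so developers don't have to write them by hand.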

Kroening told Business Insider the eventual goal was to allow illiterate people to program.

"The way I like to get [people] interested is by saying you have a computer that improves itself, or thinks about its own programming," he said.

"In our long-term vision, we would enable people who can't read or write to do programming. What we are hoping to achieve over time is that a computer will be able to deduct what it is you wanted to do by being given an example, then generalising from it. That's the high-level starting point."

An example he gives is programming your toaster and kettle to make toast and tea for you in the morning. You probably couldn't pay a developer to do that for you, but it would still be useful.
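
The "deduce from an example, then generalise" idea Kroening describes is often called programming by example. A toy sketch under obvious simplifications (the candidate set and names here are invented for illustration):

```python
# Toy "programming by example": given one input/output pair, search a
# tiny space of candidate transformations and keep the ones consistent
# with the example, then reuse the winner on new inputs.
CANDIDATES = {
    "uppercase": str.upper,
    "reverse": lambda s: s[::-1],
    "first_three": lambda s: s[:3],
}

def induce(example_in, example_out):
    return [name for name, fn in CANDIDATES.items()
            if fn(example_in) == example_out]

matches = induce("tea", "TEA")          # which transformations explain the example?
print(matches)                          # ['uppercase']
print(CANDIDATES[matches[0]]("toast"))  # generalises to a new input: 'TOAST'
```

Real systems search vastly larger program spaces, but the contract is the same: one demonstration in, a reusable program out.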

This would require considerable leaps in DiffBlue's AI capabilities and is still the long-term vision.

The startup says its technology is used by all the major banks in the UK, though Kroening couldn't go into detail due to the confidential nature of the company's agreements. Currently it works with two programming languages, Java and C, though the company has plans to expand this.

Is there a chance that professional developers could find themselves being replaced by AI?

"This is a concern that has never been raised by anyone actually doing the job," Kroening said. "[Testing is] such an unloved job. If you go to a company and relieve them of the testing, they will give you a hug."

The company is based out of Oxford, with the bulk of its staff coming from the university. It will use its funding to open a new office in London and hire sales and marketing staff. The company also wants to expand to San Francisco within the next year.

Kroening also told Business Insider that the company has ambitions to go public rather than sell. DiffBlue is part of Oxford University's spin-out incubator, Oxford Sciences Innovation, which recently hired a City broker to help its portfolio companies find funding or become listed.

"That is something which is a viable route," Kroening said of going public. "At the moment, our investors are minded to grow the company substantially, and that is quite feasible. Many companies in the UK are sold too early and that is something we want to avoid."


The 11 industries most under threat from artificial intelligence


robot helper

Artificial intelligence (AI) is expected to kill off 5 million human jobs by 2020, but some industries are more at risk than others.

UK chip designer ARM released a report on Tuesday highlighting which industries consumers expect to be disrupted the most by AI machines.

The report — carried out in partnership with Northstar Research Partners and based on responses from 3,938 consumers — found that people expect everything from banking to farming to be impacted by AI.

"The question is whether the impact will be positive or negative?" reads the ARM report. "On one hand, increased automation means greater efficiency and productivity, better employee safety and even a higher standard of living (more free time, lower prices). However, AI may also cause job losses and fundamentally change traditional employment."

The report goes on to ask whether humans will need a guaranteed "living wage" as robots take their jobs, and whether companies should pay a "robot tax" as they replace human taxpayers.

Here are the 11 industries people think are most under threat from AI machines:

11. Science (4%)

10. Healthcare/hospitals (4%)

9. Policing/security (5%)

See the rest of the story at Business Insider

This Google experiment wants artificial intelligence to help you draw (GOOG)


Sketch RNN   Sheep

A new experiment from Google is looking to help you sketch images faster and more accurately with the help of artificial intelligence (AI).

The software is called Sketch-RNN, and it's baked into a straightforward web app.

The idea is simple: select one of the pre-existing objects, start drawing, and the software will try to guess the best way to complete it automatically.

Sketch-RNN is a neural network trained on thousands of human-drawn doodles, like the ones gathered by past Google Brain efforts such as AutoDraw and Quick, Draw!.

In a blog post published earlier this year, Google said that the ultimate goal of these AI efforts specific to computer vision is to train machines to identify and recreate objects with an accuracy that mimics human thinking as closely as possible; in this case, the way we draw and connect lines and shapes when trying to sketch an image of a given object.

Sketch-RNN currently has three demos you can try in addition to the standard one: "Multiple Predict," "Interpolation," and "Variational Autoencoder."

"Multiple Predict" works much like the basic demo, but the software will show you multiple possible outcomes at once. For instance, you can begin to draw the body of a mosquito, and Sketch-RNN will show you a few ways it can complete the drawing.

Sketch RNN - Mosquito
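
The "one partial drawing in, several completions out" interface can be mimicked in a few lines. This toy predictor just extrapolates the last stroke with random jitter, whereas Sketch-RNN samples from a recurrent network trained on real pen strokes, but it shows the shape of the demo:

```python
import random

# Toy stand-in for the "Multiple Predict" interface: take the points
# drawn so far and propose k candidate completions by extending the
# last movement with small random variations.
def complete(points, k=3, steps=4, seed=0):
    rng = random.Random(seed)
    (x0, y0), (x1, y1) = points[-2], points[-1]
    dx, dy = x1 - x0, y1 - y0          # direction of the last stroke
    completions = []
    for _ in range(k):
        path, x, y = [], x1, y1
        for _ in range(steps):
            x += dx + rng.uniform(-1, 1)   # jittered continuation
            y += dy + rng.uniform(-1, 1)
            path.append((round(x, 1), round(y, 1)))
        completions.append(path)
    return completions

partial = [(0, 0), (2, 1)]   # the strokes the user has drawn so far
for candidate in complete(partial):
    print(candidate)
```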

"Interpolation" is a tricky one; the system takes two random images and tries to interpolate them to give you a hybrid result.

Google uses the example of a bicycle and a yoga position, and while the result — like in this case — may not necessarily make sense, it shows that the machine understands how to draw a third object based on the other two.

The last one, "Variational Autoencoder," actually asks you to draw a complete image of something. When you're done, it will try to guess your drawing style and give you nine different possible alternatives based on the way you sketched.

Sketch RNN   Cat

Sketch-RNN may not have the same wow factor as, say, AutoDraw, but it's nonetheless fascinating to see how good (and how quickly) computers are getting at visual recognition.

If you want to toy with Sketch-RNN, head over to Google's dedicated website here.


Leading Japanese asset manager Nomura is using AI to help portfolio managers digest data


Large FI Investment Priority

This story was delivered to BI Intelligence "Fintech Briefing" subscribers.

Financial institutions (FIs) across all industry sectors are increasingly experimenting with AI technology in an effort to augment their human staff's capabilities, with leading Japanese asset manager Nomura Asset Management (NAM) becoming the latest incumbent to join the club.

NAM announced that it has conducted a proof of concept (POC) with consultancy and software provider Nomura Research Institute (NRI) — also part of Nomura Holdings, to which NAM belongs — to determine whether AI and natural language processing (NLP) could improve the accuracy of NAM portfolio managers' investment decisions.

The POC was designed to help portfolio managers handle growing volumes of data that can influence stock prices. The parties explained that managers have to take into account large volumes of qualitative information — like analyst reports, news articles, blog posts, and social media postings — when deciding how to weight stocks in a portfolio. However, data volumes keep increasing, the parties say, leaving managers struggling to make sound judgments.

As such, part of the POC's goal was to convert this qualitative raw data into quantitative takeaways processed by AI on the managers' behalf. The AI was first trained on analyst reports to identify "positive" and "negative" language patterns in the data sources a manager would have to process. That enabled it to weigh up those factors to determine their overall effect on a stock.
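
As a schematic illustration of that pipeline (invented for illustration, not NRI's system), the conversion from qualitative text to a quantitative signal can be as simple as learning word weights from labelled analyst snippets and summing them over new text:

```python
from collections import Counter

# Schematic sketch: learn which words mark "positive" vs "negative"
# analyst language, then collapse new qualitative text into one
# quantitative score a portfolio manager could act on.
labeled_snippets = [
    ("earnings beat expectations strong growth", +1),
    ("record demand strong guidance", +1),
    ("weak sales missed expectations", -1),
    ("guidance cut weak demand", -1),
]

weights = Counter()
for text, label in labeled_snippets:
    for word in text.split():
        weights[word] += label   # words from positive docs gain weight

def score(text):
    # Sum of learned word weights: > 0 suggests positive, < 0 negative.
    return sum(weights[w] for w in text.split())

print(score("strong demand beat expectations"))  # positive: bullish signal
print(score("weak guidance"))                    # negative: bearish signal
```

A production system would use far richer NLP models, but the output is the same kind of per-stock number this toy produces.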

Despite the hype, AI seems to be solving a prosaic but real problem for FIs. It's a hard fact that a more digitized world is generating ever-growing amounts of data, at a time when FIs increasingly depend on being able to process data effectively to remain competitive. When information volumes were lower, human processing power sufficed, but it's becoming impossible for humans to effectively cope with such quantities.

Although AI still falls short on soft skills, it is proving extremely capable of crunching vast amounts of information to derive complex insights, and as such, is perfectly positioned to solve this overload problem. As a result, at least some element of AI will likely soon become standard for all FIs.

Maria Terekhova, research analyst for BI Intelligence, Business Insider's premium research service, has compiled a detailed report on core banking system overhauls that:

  • Looks at how legacy systems are structured, and how that structure makes effective data handling impossible.
  • Explains how new generation core systems are optimized to help banks make the most of their data.
  • Gives an overview of how banks should go about moving their organizations to new core systems.
  • Discusses the most common risks of overhauls, and how to avoid them to reap the benefits.



Salesforce's newest 'Einstein' AI tools can tell when people are mad in texts and emails (CRM, TWTR)


Salesforce Einstein

Watch out, human. The robots can detect your fear. And anger. And any other sentiment that you might share with a corporation while on social media.

At the TrailheaDX developer conference in San Francisco Wednesday, Salesforce announced three new tools that will make it easier for developers to incorporate artificial intelligence into custom apps.

It's great news for companies looking to increase the efficiency of their customer service and inventory management. It's terrible news for anyone whose job is to, well, provide customer service or take inventory. 

The new tools are the latest iteration of Salesforce's Einstein, the AI technology that runs on Salesforce and customizes analytics and insights to each customer. Now, programmers can use Einstein Platform Services to easily train this AI to meet their own ends. 

Among the new features is Einstein Sentiment, which can sort the tone of any given text as positive, negative or neutral. Developers can use this to create an application that can highlight angry tweets and emails. They could also use it to prioritize compliments and glowing reviews, if positive reinforcement is more of their thing. 
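
The workflow this enables looks roughly like the sketch below. It is a generic illustration with hard-coded word lists, not Salesforce's API: tag each inbound message with a tone, then surface the angry ones first.

```python
# Generic illustration of what a sentiment classifier enables in a
# support queue (hypothetical word lists, not Salesforce's Einstein API).
ANGRY = {"terrible", "furious", "refund", "worst"}
HAPPY = {"love", "great", "thanks", "perfect"}

def tone(message):
    words = set(message.lower().replace("!", "").replace(",", "").split())
    if words & ANGRY:
        return "negative"
    if words & HAPPY:
        return "positive"
    return "neutral"

def triage(messages):
    # Negative messages sort to the front of the support queue.
    order = {"negative": 0, "neutral": 1, "positive": 2}
    return sorted(messages, key=lambda m: order[tone(m)])

inbox = [
    "Great product, thanks!",
    "Where is my order?",
    "Worst service ever, I want a refund",
]
print(triage(inbox)[0])  # the angry message is handled first
```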

Another tool, Einstein Intent, gives developers the ability to sort customer inquiries by intent and then send relevant responses or personalized marketing. Salesforce imagines this tool being used by a retail company to build an app that identifies customers experiencing shipping problems, and then responds accordingly. 

The third new tool announced Wednesday, Einstein Object Detection, lets developers train models to recognize multiple unique objects within a single image. It can also detect the location, size, and quantity of objects. It's ideal for building apps to take inventory of products, like counting the number of boxes on a shelf.



Nvidia is set to dominate the '4th tectonic shift' in computing (NVDA)


Lulea data center 5 - Facebook data center

Decades of work have paid off for Nvidia. The next computer revolution is here, and the company is set to dominate its competition, according to Jefferies.

"IBM dominated in the 1950s with the mainframe computer, DEC in the mid-1960s with the transition to mini-computers, Microsoft and Intel as PCs ramped, and finally Apple and Google as cell phones became ubiquitous," Mark Lipacis wrote in a note to clients. "We believe the next tectonic shift is happening now and NVDA stands to benefit the way these aforementioned tech giants did in prior transitions."

Nvidia has been working on its CUDA computing platform and its graphics processing unit (GPU) technology for years. Traditionally, a computer has worked in a linear way, processing one task at a time on the central processing unit (CPU).

Shortly after GPUs were introduced in the 1990s, programmers began using them to break tasks into lots of smaller problems and solve them all at the same time on the GPU. This is called "parallel processing."

For certain types of problems, like rendering lots of graphics elements in a video game, GPUs were far superior to the single-minded CPU. They were slower at single tasks but could handle lots of problems at the same time. Nvidia developed a programming platform, called CUDA, to take advantage of the way its GPUs could handle these multi-faceted problems. CUDA made it easy to break traditional problems into multiple parts that ran much faster on a GPU than on a traditional CPU.
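
The serial-versus-parallel split is easy to see in miniature. The sketch below uses a Python thread pool rather than a GPU (a real GPU runs thousands of lightweight hardware threads over CUDA kernels), but the decomposition is the same: one per-element "kernel" mapped across independent chunks of data.

```python
from concurrent.futures import ThreadPoolExecutor

# Miniature of the serial-vs-parallel split: break one job into
# independent per-element tasks and run them concurrently.
def shade(pixel):
    return pixel * 2   # stand-in for per-pixel work in, say, a game frame

pixels = list(range(8))

# Serial: one worker walks the whole list, one element at a time.
serial = [shade(p) for p in pixels]

# Parallel: the same per-element "kernel" mapped across workers at once.
with ThreadPoolExecutor(max_workers=4) as pool:
    parallel = list(pool.map(shade, pixels))

print(serial == parallel)  # True: same answer, different execution shape
```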

Fast forward to modern times where artificial intelligence and deep learning technologies are the hot trends. Companies like Google, Tesla and Amazon are using artificial intelligence to program self-driving cars, conquer ancient board games and develop smart personal assistants. Luckily for Nvidia, artificial intelligence and deep learning programs are perfectly suited to run on its GPUs and CUDA platform.

Jefferies thinks these two technologies give Nvidia a huge advantage over the competition.

"We see NVDA as a major beneficiary of the 4th Tectonic Shift in Computing, where serial processing (x86) architectures give way to massively parallel processing capabilities as the next wave of connected devices approach 10b units by 2022," Jefferies said.

As tech giants build out new data centers to handle their ballooning artificial intelligence research, they often turn to Nvidia to supply the hundreds or thousands of GPUs they need. MIT Technology Review recently said Nvidia has spent around $3 billion to develop its current data center chip, a move that has paid off for the company. The publication named Nvidia the smartest company in the world in 2017, in part because of this investment.

Nvidia has been making waves in the autonomous-car business as well. The company recently announced partnerships with Baidu, Volvo and Volkswagen to improve their self-driving car technologies and its technology is already being used in vehicles made by Tesla, Audi and Toyota.

Cryptocurrency mining is another example of a process that runs better on GPUs. Nvidia has been raking in profits in that area too, and one Wall Street bank thinks it will be just another sector that Nvidia will come to dominate.

Investors have been rewarding Nvidia as it takes the computer world by storm. Shares of Nvidia are up 48.55% this year.

While it might take some time before Nvidia's $87.04 billion market cap comes close to the companies that dominated the last computing revolution (Alphabet at $598.61 billion and Apple at $751.88 billion), Jefferies has faith in the company. The investment bank raised its price target to $180, up about 19% from Nvidia's current price.



Google launched an in-house AI fund to help startups turn sci-fi into 'nonfiction' (GOOG, GOOGL)



Google on Tuesday announced a new venture fund called Gradient Ventures, which aims to mentor and develop early-stage startups focused on artificial intelligence.

Gradient Ventures will be overseen by Anna Patterson, who has worked within many branches of Google but most recently served as the company’s VP of engineering in artificial intelligence, helping integrate AI into Google’s various products.

Google didn't specify the size of the fund, or the amount of money it intends to invest in startups, though it noted that the fund would focus on "early stage" startups, suggesting that investments will be relatively modest. Google said it would take minority stakes in the companies it backed.

AI has become one of the most valuable building blocks for tech companies creating new generations of products such as virtual assistants and self-driving cars. Google, Facebook, Apple, and Microsoft are all staffing up on AI experts and acquiring AI companies.

Google's new AI fund will also be led by Google engineering director Ankit Jain and Shabih Rizvi from Kleiner Perkins. Its advisors include several Google directors — Ray Kurzweil, Peter Norvig, Matias Duarte, and Marvin Chow — as well as other members across Alphabet’s properties, including Astro Teller from Alphabet’s moonshot company X, Daphne Koller from Calico, and Jeremy Doig from YouTube.

News of Google's AI fund surfaced back in May, but Google officially unveiled the fund on Tuesday.

Gradient Ventures currently lists four portfolio companies: Algorithmia, which manages a giant marketplace of algorithms, functions, and models for researchers and organizations to use; Cogniac, which helps companies develop neural networks; Cape, which lets you fly drones using your computer; and Aurima, which is building a “deep-learning awareness platform” in California.

In a blog post, Patterson says the goal of Gradient Ventures is “to help our portfolio companies overcome engineering challenges to create products that will apply artificial intelligence to today’s challenges and those we’ll face in the future.”

You can learn more about Gradient Ventures here.


Microsoft is forming a grand army of experts in the artificial intelligence wars with Google, Facebook, and Amazon (MSFT, GOOG, GOOGL)


Eric Horvitz Microsoft Research

Artificial intelligence is fast becoming the next major battlefield between Silicon Valley's biggest companies, and Microsoft is putting its troops in formation.

On Wednesday morning, Microsoft plans to announce the creation of Microsoft Research AI, a dedicated unit within its global Microsoft Research division that will focus exclusively on how to make the company's software smarter, now and in the future. 

Make no mistake, Microsoft has long employed a veritable army of AI experts, who have contributed their expertise to products and services including Microsoft Translator, the Microsoft Cortana digital assistant, and even the infamous rogue Tay chatbot.

The difference now, Microsoft Research Labs director Eric Horvitz tells Business Insider, is that this new organization will bring roughly 100 of those experts under one figurative roof. By bringing them together, Horvitz says, Microsoft's AI team can do more, faster.

Horvitz describes the formation of Microsoft Research AI as a "key strategic effort," a move that is "absolutely critical" as artificial intelligence becomes increasingly important to the future of technology. All told, Microsoft Research AI encompasses about a tenth of the staff of the overall Microsoft Research group, with plans to grow.

Plus, Microsoft is taking steps to make sure that artificial intelligence is used responsibly, thanks to a new ethics oversight board called Aether, made up of top Microsoft execs from across the company. And a new "design guide" will give Microsoft teams insight into how to responsibly develop and deploy AI.

Bill Gates' dream, coming true

Horvitz says that the formation of this group speaks to Bill Gates' original vision for Microsoft Research when it was founded back in 1991. Microsoft's famous mission at the time was "a personal computer on every desk and in every home" — but Gates wanted to go a step further, so those same computers could see you, hear you, and talk to you. 

It was that vision that drew Horvitz to Microsoft in the first place, when he was a freshly-minted PhD in 1993. Now, with artificial intelligence getting better by the day, he's a key player in the push to get it the rest of the way there. 

"A lot of [Microsoft's AI priorities] draw on the original vision from [Microsoft co-founders] Bill Gates and Paul Allen," says Horvitz.

cortana reminders notification

Indeed, in the long term, Microsoft Research AI is making it a major goal to build "general AI," the holy grail for artificial intelligence researchers. While current systems like Microsoft Cortana or Amazon Alexa seem intelligent, they can only really say what they've been programmed to say. A general AI would think and reason like a human.

In the nearer term, Microsoft is focused on applying artificial intelligence to the tools that customers already use. Horvitz cites an internal presentation he recently attended about how Excel spreadsheets might be smart enough to catch formula errors before you make them. He calls this "augmenting human cognition."
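A toy sketch of the kind of spreadsheet check Horvitz describes might look like the following. The rule (a SUM range that stops short of the data) and the function name are hypothetical, invented purely for illustration:

```python
import re

def check_sum_range(formula, last_data_row):
    """Flag a SUM formula whose range stops short of the data.

    formula       -- e.g. "=SUM(A1:A9)"
    last_data_row -- the last row that actually holds data, e.g. 10
    """
    m = re.match(r"=SUM\(([A-Z]+)(\d+):([A-Z]+)(\d+)\)", formula)
    if not m:
        return None  # not a simple SUM; nothing to check
    end_row = int(m.group(4))
    if end_row < last_data_row:
        return f"range ends at row {end_row} but data continues to row {last_data_row}"
    return None
```

A real assistant would of course learn such rules from millions of spreadsheets rather than hard-code one regex, but the shape of the feature, warning before the mistake is committed, is the same.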

Evidence that Microsoft is on the right track can be found in the company's success at using its bots to beat high scores in games like "Ms. Pac-Man." 

"It's fun to compete and be at the top of the charts for these games and challenge problems," says Horvitz. Now, it's really time to double down and bring this tech to products people actually use.

Ethical dilemma

Technology is only one part of the problem. As any AI expert will tell you, there's tremendous potential for AI systems to go terribly wrong — not just in the Skynet "kill-all-humans" kind of way, but also in more insidious ways, like manipulating people to spend more money, or choosing which groups of humans to kill in a self-driving car accident.  Artificial intelligence carries a lot of power, and a lot of responsibility.

That's why Microsoft has also announced the formation of Aether (AI and ethics in engineering and research), a board of executives drawn from across every division of the company, including lawyers. The idea, says Horvitz, is to spot issues and potential abuses of AI before they start. It's a model that he hopes is adopted by others.

Similarly, Microsoft's AI design guide aims to help engineers build systems that augment what humans can do, without making them feel obsolete. Otherwise, people might start to feel like machines are piloting them, rather than the other way around. That's why it's so important that apps like Cortana feel warm and relatable.

"Oh my goodness, those computers better talk to us in a way that's friendly and approachable," says Microsoft General Manager Emma Williams, in charge of the group behind the design guide. "As people, we have the control."

Microsoft isn't the only one thinking in this direction: Google, too, just launched a cross-company design group called People + AI Research, or PAIR, to keep its teams thinking in the same way. 

That kind of thing is fine by Horvitz, who sees it as critical and "pre-competitive" for these tech titans to work together and make sure that AI is being responsibly built. Once that's settled, they can go back to fighting tooth and nail for the future of tech.

SEE ALSO: Microsoft finally releases its secret weapon in the cloud wars with Amazon and Google

Join the conversation about this story »

NOW WATCH: Google's DeepMind AI just taught itself to walk

Microsoft just released an incredible new app that helps blind people see the world around them (MSFT)

Artificial intelligence is helping people do extraordinary things. At Microsoft, it's helping blind people see the world around them like never before.

In March of last year, Microsoft showed off a prototype of its Seeing AI app, which looked very promising at the time. On Wednesday, the company released the free app to all iOS users.

The Seeing AI app uses your smartphone's camera to identify things in your environment — people, objects, and even emotions — to provide important context for what's going on around you.

Take a look.

SEE ALSO: Microsoft CEO: The secret to a harmonious life is to stop obsessing over your smartphone

Meet Saqib Shaikh.



Shaikh lost the use of his eyes when he was just seven years old.



Shortly after, Shaikh was introduced to talking computers at a school for the blind. This inspired him to become a programmer.



See the rest of the story at Business Insider

Nvidia's secret weapon just led to an upgrade from Wall Street (NVDA)

Nvidia has been on an upward tear all year, and Wall Street is just now figuring out why.

The company is well known by PC gamers for its graphics processing units, and Nvidia has gained a recent foothold in the data center and driverless-car sectors. But these are just symptoms of Nvidia's biggest weapon: a company culture of innovation.

That's according to a note to clients from SunTrust Robinson Humphrey analyst William Stein, who upgraded Nvidia to a Buy and raised his price target on the stock to $177, 12% higher than the current share price.

"We believe there are still aspects of the company, specifically related to its role in AI, that are under-appreciated by the Street," William Stein, an analyst at SunTrust, wrote. "Nvidia has a strong culture of innovation and desire to drive general purpose graphics processing unit (GP-GPU) computing adoption."

The company has produced graphics processing units, or GPUs, for the PC gaming community since the 1990s, but the general-purpose GPUs that the artificial intelligence community has started to use are a more recent technology. Nvidia first started focusing on the technology in the early 2000s.

Nvidia's CUDA computing platform, first released in 2006, allows programmers to take advantage of the "parallel" processing power of the company's GPUs, expanding their usefulness beyond processing video game graphics.
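Very loosely, the "parallel" idea CUDA exposes is that one small kernel function is applied to many data elements independently. This plain-Python sketch (not CUDA's actual API, just an analogy) contrasts a sequential loop with mapping the same kernel across all elements at once:

```python
from concurrent.futures import ThreadPoolExecutor

def shade_pixel(p):
    # one small function applied independently to every element:
    # the "same instruction, many data items" pattern GPUs excel at
    return min(255, int(p * 1.2))

pixels = [10, 100, 200, 250]

# sequential, CPU-style processing ...
sequential = [shade_pixel(p) for p in pixels]

# ... versus mapping the same kernel over all elements concurrently,
# which CUDA generalises to thousands of GPU threads
with ThreadPoolExecutor() as pool:
    parallel = list(pool.map(shade_pixel, pixels))

assert sequential == parallel
```

Because every element is processed independently, the work can be split across as many execution units as the hardware offers, which is why GPU-style parallelism suits the matrix-heavy arithmetic of machine learning.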

The CUDA software is one specific innovation Stein says gives Nvidia a huge competitive edge because it is harder to replicate than the company's hardware technology.

Nvidia also sponsors lots of academic research. The company has partnered with a large number of car and tech company data teams to aid them in their pursuit of autonomous vehicles. It also sponsors research on deep learning and artificial intelligence, which is often run using the company's hardware and software.

This endless pursuit of innovation has led to Nvidia being crowned the smartest company in the world by MIT Technology Review, and Stein says it's more important than any single chip or technology the company produces today.

It's why, even though investors love Nvidia, Stein thinks there is more room for the stock to grow. He calls the culture of innovation at Nvidia a "deep and wide" moat that separates it from traditional and startup competitors.

Investors have so far rewarded Nvidia for the innovations that come as a result of the company culture. Shares have increased 54.05% this year. Nvidia is currently trading at $157.01 and increased 2.18% on Wednesday after the upgrade by SunTrust.

Click here to read all of our coverage of Nvidia ...

SEE ALSO: Nvidia is set to dominate the '4th tectonic shift' in computing

Join the conversation about this story »

NOW WATCH: An economist explains what could happen if Trump pulls the US out of NAFTA

Microsoft's newest app uses AI to narrate the world (MSFT)

This story was delivered to BI Intelligence Apps and Platforms Briefing subscribers. To learn more and subscribe, please click here.

On Wednesday, Microsoft unveiled Seeing AI, an iOS app that uses the smartphone camera to tell visually impaired users what’s in front of them, according to CNBC.

The technology — which uses computer vision to identify the objects in a user's environment to provide context on their surroundings — points to Microsoft’s AI clout. It also helps to position the firm as a front-runner in the rapidly growing mobile health ecosystem.

Seeing AI can read text from signs and documents aloud; describe places, people, and their emotions; recognize currency values; identify household products by scanning barcodes; and even provide instructions to users if the object they want described is not in-frame.

Microsoft is ramping up its efforts to integrate AI into all facets of its users’ lives as it jockeys against other tech titans in the AI landscape:

  • Microsoft’s presence in the AI ecosystem distinguishes it from competitors. Microsoft is competing against a variety of tech giants like Google, Amazon, Facebook, and Apple that are all looking to set the tone in the AI space. Dating back to the early 1990s with the creation of Microsoft’s research labs, the company has pioneered advancements aimed at developing solutions in the realms of computer vision, speech recognition, natural-language processing, and machine learning.
  • Microsoft is expanding its AI efforts into a whole range of its products. Last year, the company established the Microsoft AI and Research Group, which unites the company’s research organization with over 5,000 computer scientists and engineers dedicated to its AI product developments. Since then, Microsoft has accelerated the delivery of new capabilities that integrate AI, like its flagship Office 365 suite, its digital assistant Cortana, and chatbots on Skype.
  • Microsoft is leveraging its AI capabilities to create optimal healthcare solutions. Along with the release of the Seeing AI app, Microsoft has developed quite a few recent health-focused initiatives and solutions. Healthcare NExT is Microsoft’s recent initiative focused on healthcare transformation that leverages existing AI work and Azure cloud services. 

Embracing third-party platforms is a strategic play by Microsoft to stay relevant in the mobile ecosystem, which the company has struggled with. On Tuesday, the company ceased support for its Windows Phone 8.1 operating system. However, relying on third-party platforms can present challenges for Microsoft, since platforms like Apple and Google are capable of developing their own AI solutions and applications, and many consumers hold strong interest in staying within the same ecosystem.

To receive stories like this one directly to your inbox every morning, sign up for the Apps and Platforms Briefing newsletter. Click here to learn more about how you can gain risk-free access today.

Join the conversation about this story »

An AI researcher explains what scares him the most about robots

As an artificial intelligence researcher, I often come across the idea that many people are afraid of what AI might bring.

It’s perhaps unsurprising, given both history and the entertainment industry, that we might be afraid of a cybernetic takeover that forces us to live locked away, “Matrix”-like, as some sort of human battery.

And yet it is hard for me to look up from the evolutionary computer models I use to develop AI, to think about how the innocent virtual creatures on my screen might become the monsters of the future.

Might I become “the destroyer of worlds,” as Oppenheimer lamented after spearheading the construction of the first nuclear bomb?

I would take the fame, I suppose, but perhaps the critics are right. Maybe I shouldn’t avoid asking: As an AI expert, what do I fear about artificial intelligence?

Fear of the unforeseen

The HAL 9000 computer, dreamed up by science fiction author Arthur C. Clarke and brought to life by movie director Stanley Kubrick in "2001: A Space Odyssey," is a good example of a system that fails because of unintended consequences.

In many complex systems – the RMS Titanic, NASA’s space shuttle, the Chernobyl nuclear power plant – engineers layer many different components together.

The designers may have known well how each element worked individually, but didn’t know enough about how they all worked together.

That resulted in systems that could never be completely understood, and could fail in unpredictable ways. In each disaster – sinking a ship, blowing up two shuttles and spreading radioactive contamination across Europe and Asia – a set of relatively small failures combined together to create a catastrophe.

I can see how we could fall into the same trap in AI research. We look at the latest research from cognitive science, translate that into an algorithm and add it to an existing system. We try to engineer AI without understanding intelligence or cognition first.

Systems like IBM’s Watson and Google’s Alpha equip artificial neural networks with enormous computing power, and accomplish impressive feats. But if these machines make mistakes, they lose on “Jeopardy!” or don’t defeat a Go master. These are not world-changing consequences; indeed, the worst that might happen to a regular person as a result is losing some money betting on their success.

But as AI designs get even more complex and computer processors even faster, their skills will improve. That will lead us to give them more responsibility, even as the risk of unintended consequences rises. We know that “to err is human,” so it is likely impossible for us to create a truly safe system.

Fear of misuse

irobot robotI’m not very concerned about unintended consequences in the types of AI I am developing, using an approach called neuroevolution.

I create virtual environments and evolve digital creatures and their brains to solve increasingly complex tasks. The creatures’ performance is evaluated; those that perform the best are selected to reproduce, making the next generation.

Over many generations these machine-creatures evolve cognitive abilities.

Right now we are taking baby steps to evolve machines that can do simple navigation tasks, make simple decisions, or remember a couple of bits. But soon we will evolve machines that can execute more complex tasks and have much better general intelligence. Ultimately we hope to create human-level intelligence.

Along the way, we will find and eliminate errors and problems through the process of evolution. With each generation, the machines get better at handling the errors that occurred in previous generations. That increases the chances that we’ll find unintended consequences in simulation, which can be eliminated before they ever enter the real world.
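The loop described above (evaluate every creature, keep the best performers, let them reproduce with mutation) can be sketched in a few lines of Python. The four-weight "genomes" and the sum-based fitness function here are toy stand-ins invented for illustration, not the author's actual models:

```python
import random

random.seed(0)

def fitness(genome):
    # toy stand-in for "how well did this creature do its task"
    return sum(genome)

def mutate(genome, rate=0.5):
    # each child is a slightly perturbed copy of its parent
    return [g + random.uniform(-rate, rate) for g in genome]

# a random starting population of simple four-weight "brains"
population = [[random.uniform(-1, 1) for _ in range(4)] for _ in range(20)]

for generation in range(30):
    # evaluate every creature and keep the best five ...
    population.sort(key=fitness, reverse=True)
    survivors = population[:5]
    # ... then let the survivors reproduce, with mutation,
    # to refill the population for the next generation
    population = survivors + [mutate(random.choice(survivors)) for _ in range(15)]

best = max(population, key=fitness)
```

In the same spirit, a "kindness" or "honesty" bonus added to the fitness function is one way selection pressure could be pointed at ethical behaviour rather than raw task performance.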

Another possibility that’s farther down the line is using evolution to influence the ethics of artificial intelligence systems. It’s likely that human ethics and morals, such as trustworthiness and altruism, are a result of our evolution – and factor in its continuation. We could set up our virtual environments to give evolutionary advantages to machines that demonstrate kindness, honesty and empathy. This might be a way to ensure that we develop more obedient servants or trustworthy companions and fewer ruthless killer robots.

While neuroevolution might reduce the likelihood of unintended consequences, it doesn’t prevent misuse. But that is a moral question, not a scientific one. As a scientist, I must follow my obligation to the truth, reporting what I find in my experiments, whether I like the results or not. My focus is not on determining whether I like or approve of something; it matters only that I can unveil it.

Fear of wrong social priorities

Being a scientist doesn’t absolve me of my humanity, though. I must, at some level, reconnect with my hopes and fears. As a moral and political being, I have to consider the potential implications of my work and its potential effects on society.

As researchers, and as a society, we have not yet come up with a clear idea of what we want AI to do or become. In part, of course, this is because we don’t yet know what it’s capable of. But we do need to decide what the desired outcome of advanced AI is.

One big area people are paying attention to is employment. Robots are already doing physical work like welding car parts together. One day soon they may also do cognitive tasks we once thought were uniquely human. Self-driving cars could replace taxi drivers; self-flying planes could replace pilots.

Instead of getting medical aid in an emergency room staffed by potentially overtired doctors, patients could get an examination and diagnosis from an expert system with instant access to all medical knowledge ever collected – and get surgery performed by a tireless robot with a perfectly steady "hand."

Legal advice could come from an all-knowing legal database; investment advice could come from a market-prediction system.

Perhaps one day, all human jobs will be done by machines. Even my own job could be done faster, by a large number of machines tirelessly researching how to make even smarter machines.

In our current society, automation pushes people out of jobs, making the people who own the machines richer and everyone else poorer. That is not a scientific issue; it is a political and socioeconomic problem that we as a society must solve. My research will not change that, though my political self – together with the rest of humanity – may be able to create circumstances in which AI becomes broadly beneficial instead of increasing the discrepancy between the one percent and the rest of us.

Fear of the nightmare scenario

There is one last fear, embodied by HAL 9000, the Terminator and any number of other fictional superintelligences: If AI keeps improving until it surpasses human intelligence, will a superintelligence system (or more than one of them) find it no longer needs humans? How will we justify our existence in the face of a superintelligence that can do things humans could never do? Can we avoid being wiped off the face of the Earth by machines we helped create?

The key question in this scenario is: Why should a superintelligence keep us around?

I would argue that I am a good person who might have even helped to bring about the superintelligence itself. I would appeal to the compassion and empathy that the superintelligence has to keep me, a compassionate and empathetic person, alive. I would also argue that diversity has a value all in itself, and that the universe is so ridiculously large that humankind’s existence in it probably doesn’t matter at all.

But I do not speak for all humankind, and I find it hard to make a compelling argument for all of us. When I take a sharp look at us all together, there is a lot wrong: We hate each other. We wage war on each other. We do not distribute food, knowledge or medical aid equally. We pollute the planet. There are many good things in the world, but all the bad weakens our argument for being allowed to exist.

Fortunately, we need not justify our existence quite yet. We have some time – somewhere between 50 and 250 years, depending on how fast AI develops. As a species we can come together and come up with a good answer for why a superintelligence shouldn’t just wipe us out. But that will be hard: Saying we embrace diversity and actually doing it are two different things – as are saying we want to save the planet and successfully doing so.

We all, individually and as a society, need to prepare for that nightmare scenario, using the time we have left to demonstrate why our creations should let us continue to exist. Or we can decide to believe that it will never happen, and stop worrying altogether. But regardless of the physical threats superintelligences may present, they also pose a political and economic danger. If we don’t find a way to distribute our wealth better, we will have fueled capitalism with artificial intelligence laborers serving only very few who possess all the means of production.

SEE ALSO: Microsoft's newest app uses AI to narrate the world

Join the conversation about this story »

NOW WATCH: The inventor of Roomba has created a weed-slashing robot for your garden


Google is turning Street View imagery into pro-level landscape photographs using artificial intelligence

Google SV AI [1]

A new experiment from Google is turning imagery from the company's Street View service into impressive digital photographs using nothing but artificial intelligence (AI).

Google is using machine learning algorithms to train a deep neural network to roam around places such as Canada's and California's national parks, look for potentially suitable landscape images, and then work on them with special post-processing techniques.

The idea is to "mimic the workflow of a professional photographer," and to do so Google is relying on so-called generative adversarial networks (GAN), which essentially pit two neural networks against one another.

With this Google experiment, the first, "generative" model tries to fix a picture that has previously been messed with on purpose (with things like brightness and contrast changed at random), while the "discriminative" one analyses and compares the original (messed) shot and the fixed one.
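The adversarial setup can be reduced to a toy sketch. Here the "image" is a single brightness value, the corruption is a fixed darkening, the generator is one learnable correction, and the discriminator is just a distance to the original; all of these are drastic simplifications (a real GAN trains both networks jointly on pixels), invented for illustration:

```python
ORIGINAL = 0.6  # ground-truth brightness of one training photo

def corrupt(x):
    # mess with the picture on purpose, as in Google's training setup
    return x - 0.25  # darken it by a fixed amount

def discriminator(restored):
    # here just a distance to the original; in a real GAN this
    # critic is itself a trained neural network, not a formula
    return abs(restored - ORIGINAL)

correction = 0.0  # the "generator", reduced to one learnable number
step = 0.05
for _ in range(200):
    damaged = corrupt(ORIGINAL)
    # keep whichever nudge the discriminator penalises less
    if discriminator(damaged + correction + step) < discriminator(damaged + correction - step):
        correction += step
    else:
        correction -= step
```

After training, the learned correction roughly undoes the deliberate darkening, which is the point of the exercise: the generator has recovered the editing move without ever being told what it was.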

Google SV AI [0]

The result is a software that is able to understand the principles of good photography (like, for instance, not oversaturating colours), and uses this knowledge to work its way through the scanned images that come from Google Maps.

When the AI system recognises a potentially interesting image, it first crops it, then tweaks things such as saturation and the strength of dynamic range, and applies a filter (Google calls it a "dramatic mask").
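That crop-then-enhance pipeline might be sketched like this in Python; the crude push-away-from-grey saturation formula and the tiny three-pixel "image" are invented for illustration, not Google's actual processing:

```python
def crop(pixels, left, right):
    # keep only the columns between left and right
    return [row[left:right] for row in pixels]

def boost_saturation(pixels, factor):
    # push each (r, g, b) channel away from the pixel's grey
    # average: a crude stand-in for a real saturation tweak
    out = []
    for row in pixels:
        new_row = []
        for r, g, b in row:
            grey = (r + g + b) / 3
            new_row.append(tuple(
                max(0, min(255, int(grey + (c - grey) * factor)))
                for c in (r, g, b)))
        out.append(new_row)
    return out

# a one-row "image" of three RGB pixels
image = [[(120, 60, 30), (200, 180, 160), (90, 90, 90)]]
processed = boost_saturation(crop(image, 0, 2), 1.5)
```

Note that the grey pixel would pass through unchanged: a saturation boost only amplifies colour that is already there, one of the "principles of good photography" the trained model has to respect rather than hard-code.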

The results are — according to professional photographers — impressive. The photographers Google worked with on the small project ranked about two in every five shots as on par with semi-professional or straight-up professional-grade work.

Take a look for yourself at Google AI's work below.

Google SV AI [4]

Google SV AI [3]Google SV AI [2]

Join the conversation about this story »

NOW WATCH: This cell phone doesn't have a battery and never needs to be charged

The maker of the AK-47 made a robotic, AI gun system for Russia

The maker of the celebrated AK-47 rifle has unveiled a new robotic gun system for the Russian military that will use artificial intelligence to size up targets — then shoot.

Reports from TASS and others show a turret system that can be installed on vehicles and operated by remote control.

Sofiya Ivanova, the director of communications for Kalashnikov, told TASS: "In the imminent future, the group will unveil a range of products based on neural networks. A fully automated combat module featuring this technology is planned to be demonstrated at the Army-2017 forum."

The report added that it will be a "fully automated combat module based on neural network technologies that enable it to identify targets and make decisions."

Another report in Defense One said it appears to be capable of firing 25mm rounds like those used in anti-aircraft guns.

Defense One's editor Patrick Tucker wrote that the Russians are eager to use battlefield robots while the U.S. is not.

Kalashnikov's AK-47 is the most widely used weapon in the world, considered simple to use and very hard to break or jam. The Examiner recently reported that the Pentagon would like the rifles to be made in the U.S.

SEE ALSO: Watch a Russian MiG-29 Fulcrum catch fire while taking off in Belarus

Join the conversation about this story »

NOW WATCH: Watch Russia’s newest fighter jet in action — the MiG-35

It looks like Google’s voice assistant app is tanking on the iPhone two months after it launched (GOOG, AAPL)

Google Assistant doesn’t appear to be a big hit with iPhone owners.

The iOS version of Google’s voice assistant has been downloaded a total of 300,000 times since launching two months ago, according to data from analytics firm App Annie provided to Business Insider.

Other estimates are less flattering: Data from fellow analytics firm Sensor Tower provided to Business Insider says the app had garnered just 190,000 downloads on iOS as of Friday.

Google made the Assistant available as a standalone app on iOS devices this past May. The app is only available in the US thus far, and third-party estimates should always be taken with a grain of salt — but even with that said, the estimates are low enough to make demand for Google’s Siri rival appear fairly meager.

Google did not immediately respond to a request for comment.

Both estimates paint a similar picture of the app’s trajectory: It started with a solid spike in downloads before leveling off sharply a couple of weeks after it launched.

In an email, an App Annie spokesperson said Google Assistant’s figures and download patterns have been similar to that of Google Allo, the Assistant-aided chat app Google launched to little fanfare on iOS last fall.

Sensor Tower’s estimates, meanwhile, peg the Assistant as the 26th most-downloaded Google app for the iPhone in the US since it launched. The app has averaged between 1,000 and 2,000 downloads per day since the second week of June, said Randy Nelson, the firm’s head of mobile insights.

Here’s how Sensor Tower lays out the app’s daily downloads since launch:

That the iOS version of Google Assistant isn't lighting the world on fire isn't necessarily a surprise or even an indictment of the Assistant itself. Past studies have said it's more knowledgeable than peers like Siri or Amazon's Alexa, and we've found it to compare favorably to the other major players — though every voice assistant continues to have issues with reliability and interoperability from time to time.

The fact that Google Assistant is available at all on iOS is still a boon for those who use Assistant-ready devices but don’t own an Android device, where Google Assistant is integrated on a deeper level. And in many ways, Google’s goal with the Assistant is to bypass the smartphone altogether; there’s the Google Home speaker, for one, but the company also made it possible for developers to build Google Assistant into other smart devices earlier this year.

From a product perspective, Google's main hangup on iOS is that Apple makes accessing the Assistant (or any third-party assistant) more cumbersome than using Siri. While Siri can be accessed at any time by holding down the home button or saying "Hey Siri," Google Assistant requires you to open its app to work. The app also can't perform some simple tasks, like setting alarms, as a result of it being a third-party service.

Google’s helper is still able to link up with more outside services and smart home devices than Siri, but its seemingly mediocre download numbers suggest most iPhone users either don’t need those kind of advanced features from a voice assistant just yet, or don’t want to bother with opening a separate app to use them.

Either way, iPhone owners are a sizable base in the US. If Google’s goal is to make the Assistant ubiquitous, it looks like it still has a ways to go to make its brand of voice tech something people are willing to go out of their way for.

SEE ALSO: The first real Alexa phone is here — here’s what it’s like

Join the conversation about this story »

NOW WATCH: Apple finally unveiled its Siri-powered version of Google Home and Amazon Echo — here's everything you need to know

The House of Lords is going to carry out a public inquiry into artificial intelligence

The House of Lords has launched a public inquiry into advances in the field of artificial intelligence (AI).

The House of Lords said on Wednesday that the new Select Committee on Artificial Intelligence will "consider the economic, ethical, and social implications of advances in artificial intelligence."

AI is set to bring about major changes to the way humans live and work. Well-known scientists and entrepreneurs such as Stephen Hawking and Elon Musk have warned about the potential dangers superintelligent AI presents.

But their concerns are very much in the realm of science fiction at the moment and there are a range of more near-term risks that need to be considered, such as how we ensure humanity as a whole benefits from AI developments as opposed to certain countries or individuals.

The Committee will aim to answer a series of questions including:

  • How can the data-based monopolies of some large corporations, and the 'winner-takes-all' economics associated with them, be addressed?
  • Is the current level of excitement surrounding artificial intelligence warranted?
  • What role should the Government take in the development and use of artificial intelligence in the UK?

Lord Clement-Jones, chairman of the Select Committee on Artificial Intelligence, said in a statement:

"This inquiry comes at a time when artificial intelligence is increasingly seizing the attention of industry, policymakers and the general public. The Committee wants to use this inquiry to understand what opportunities exist for society in the development and use of artificial intelligence, as well as what risks there might be.

"We are looking to be pragmatic in our approach, and want to make sure our recommendations to Government and others will be practical and sensible. There are significant questions to address relevant to both the present and the future, and we want to help inform the answers to them. To do this, we need the help of the widest range of people and organisations."

The committee is inviting contributions from members of the public and organisations that have an interest in AI and public policy. Submissions must be with the Committee by September 6 if they are to be considered.

"If you are interested in artificial intelligence and any of its aspects, we want to hear from you," said Clement-Jones. If you are interested in public policy, we want to hear from you. If you are interested in any of the issues raised by our call for evidence, we want to hear from you."

The Committee will have to submit a report to government by March 31, 2018.

Join the conversation about this story »

NOW WATCH: Here's how Google Maps knows when there is traffic

Google DeepMind's CEO and Uber's chief scientist backed a UK chip company in a $30 million round

Graphcore, a Bristol-based machine learning startup, has raised $30 million (£23 million) as it prepares to ship its first artificial intelligence (AI)-focused computer chips later this year. Total investment in the company now stands at $62 million (£48 million).

Graphcore's chips, known as intelligence processing units (IPUs), are designed to help computer scientists and programmers create highly-intelligent computer systems that can learn for themselves when fed large amounts of data. Such systems will be vital to power technologies such as autonomous cars and AI-powered cancer detectors in the future. 

The company claims that its IPUs will allow researchers to develop new forms of AI quicker and more efficiently than today's graphics processing units (GPUs) and central processing units (CPUs), which have been around for decades and struggle to process the large quantities of data used in AI development.

"We're developing a chip but also building that into a system that goes into servers and cloud infrastructure as well," CEO Nigel Toon told Business Insider on Thursday. "It is a system to accelerate machine learning."

The funding round was led by London-based venture capital firm Atomico, which was set up by Skype billionaire Niklas Zennstrom in 2006.

Siraj Khaliq, the Atomico partner who will join the Graphcore board of directors, said in a blog post that "software relies on hardware to deliver its potential. And when it comes to AI, current hardware is proving a poor fit for the task."

A number of high-profile angels also participated in the round including: Google DeepMind cofounder Demis Hassabis; Uber chief scientist Zoubin Ghahramani, who is also a professor at the University of Cambridge; and the cofounders of Elon Musk's AI research firm, OpenAI. The company was also backed by corporates like Samsung, Dell, and Bosch in a series A funding round last year. 

"Having them on boards and supporting us is a very strong validation," said Toon. "They have that insight into not only what is happening in machine learning today, but they're the innovators on what is going to come next. It gives us the insight into where the technology is going and how our IPU technology can help."

Toon said the money will be used to help Graphcore scale up its team from 60 people today to 120 people by the end of next year, and to help it plan its next generation of products.

"Building systems capable of general artificial intelligence means developing algorithms that can learn from raw data and generalise this learning across a wide range of tasks," said Hassabis in a statement. "This requires a lot of processing power, and the innovative architecture underpinning Graphcore's processors holds a huge amount of promise."

Greg Brockman, cofounder and CTO of OpenAI, added: "Training machine intelligence models in minutes rather than days or weeks will profoundly transform how developers work, how they experiment and the results they will see. Being able to experiment across a much broader front, at a much faster pace will create new breakthroughs and will allow us to combine many machine intelligence techniques to jumpstart progress."

Join the conversation about this story »

NOW WATCH: We drove a brand-new Tesla Model X from San Francisco to New York — here's what happened

Viewing all 1375 articles
Browse latest View live

