Channel: Artificial Intelligence

On-board computers and sensors could stop the next car-based attack



  • Cities are scrambling to protect busy pedestrian areas and popular events.
  • Computerized systems can be used to automatically brake or even steer the car.
  • But the question of whether humans should still be able to control the car — in certain scenarios — raises ethical concerns.


In the wake of car- and truck-based attacks around the world, most recently in New York City, cities are scrambling to protect busy pedestrian areas and popular events. It’s extremely difficult to prevent vehicles from being used as weapons, but technology can help.

Right now, cities are trying to determine where and how to place statues, spike strip nets and other barriers to protect crowds. Police departments are trying to gather better advance intelligence about potential threats, and training officers to respond – while regular people are seeking advice for surviving vehicle attacks.

These solutions aren’t enough: It’s impractical to put up physical barriers everywhere, and all but impossible to prevent would-be attackers from getting a vehicle. As a researcher of technologies for self-driving vehicles, I see that potential solutions already exist, and are built into many vehicles on the road today.

There are, however, ethical questions to weigh about who should control the vehicle – the driver behind the wheel or the computer system that perceives potential danger in the human’s actions.


A computerized solution

Approximately three-fourths of cars and trucks surveyed by Consumer Reports in 2017 have forward-collision detection as either a standard or an optional feature.

These vehicles can detect obstacles – including pedestrians – and stop or avoid hitting them. By 2022, emergency braking will be required in all vehicles sold in the U.S.

Safety features in today’s cars include lane-departure warnings, adaptive cruise control and various types of collision avoidance. All of these systems involve multiple sensors, such as radars and cameras, tracking what’s going on around the car.

Most of the time, they run passively, neither communicating with the driver nor taking control of the car. But when certain events occur – such as approaching a pedestrian or an obstacle – these systems spring to life.

Warning systems can make a sound, alerting a driver that the car is straying out of its lane, either into oncoming traffic or perhaps off the road itself. They can even control the car, adjusting speed to maintain a safe distance from the car ahead. And collision avoidance systems have a variety of capabilities, including audible alerts that require driver response, automatic emergency braking and even steering the car out of harm’s way.

Existing systems can identify the danger and whether it’s headed toward the car (or if the car’s headed toward it). Enhancing these systems could help prevent various driving behaviors that are commonly used during attacks, but not in safe operations of a vehicle.


Preventing collisions

A typical driver seeks to avoid obstacles and particularly pedestrians. A driver using a car as a weapon does the opposite, aiming for people.

Typical automobile collision-avoidance systems tend to handle this by alerting the driver and then, only at the last minute, taking control and applying the brakes.

Someone planning a vehicle attack may try to disable the electronics associated with those systems. It’s hard to defend against physical alteration of a car’s safety equipment, but manufacturers could prevent cars from starting or limit the speed and distance they can travel, if the vehicle detects tampering.

However, right now it’s relatively easy for a malicious driver to override safety features: Many vehicles assume that if the driver is actively steering the car or using the brake and accelerator pedals, the car is being controlled properly. In those situations, the safety systems don’t step in to slam on the brakes at all.

These sensors and systems can identify what’s in front of them, which would help inform better decisions. To protect pedestrians from vehicle attacks, the system could be programmed to override the driver when humans are in the way. The existing technology could do this, but isn’t currently used that way.
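As a rough sketch of how such an override could work, a supervisory controller might arbitrate between driver input and what the perception system reports. Everything below is hypothetical: the class, the thresholds, and the mode names are invented for illustration, not taken from any production system.

```python
# Hypothetical supervisory logic: defer to the driver in normal operation,
# but take over when pedestrians are detected in the vehicle's path.
# Thresholds and mode names are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Perception:
    pedestrians_in_path: bool   # does the planned path intersect pedestrians?
    time_to_collision_s: float  # estimated seconds until impact

def control_action(p: Perception, driver_is_braking: bool) -> str:
    if not p.pedestrians_in_path:
        return "driver"          # normal case: driver keeps full control
    if driver_is_braking:
        return "assist_braking"  # driver is responding; add braking force
    if p.time_to_collision_s < 2.0:
        return "override_brake"  # ignore pedals and steering, emergency-brake
    return "warn"                # time remains: audible alert first
```

The key difference from the systems described above is the `override_brake` branch: active accelerator or steering input no longer convinces the car that it is being controlled properly.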

It’s still possible to imagine a situation where the car would struggle to impose safety rules. For instance, a malicious driver could accelerate toward a crowd or an individual person so fast that the car’s brakes couldn’t stop it in time. A system that is specifically designed to stop driver attacks could be programmed to restrict vehicle speed below its ability to brake and steer, particularly on regular city streets and when pedestrians are nearby.
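The speed restriction suggested here follows from basic stopping-distance arithmetic: if the car must be able to stop within the sensed distance to pedestrians, the allowed speed is bounded by the positive root of the stopping-distance equation. A minimal sketch, assuming a fixed braking deceleration and sensing latency (both invented figures):

```python
# Bound vehicle speed so it can always stop within the measured distance
# to nearby pedestrians. Stopping distance = v*t + v**2 / (2*a), where t is
# the sensing-to-braking delay and a is the braking deceleration.
import math

BRAKING_DECEL = 7.0   # m/s^2, roughly 0.7 g on dry pavement (assumption)
SYSTEM_LATENCY = 0.2  # s, perception-to-actuation delay (assumption)

def max_safe_speed(distance_m: float) -> float:
    """Largest v (m/s) with v*SYSTEM_LATENCY + v**2/(2*BRAKING_DECEL) <= distance_m."""
    a, t = BRAKING_DECEL, SYSTEM_LATENCY
    # Rearranged: v**2 + 2*a*t*v - 2*a*distance_m = 0; take the positive root.
    return -a * t + math.sqrt((a * t) ** 2 + 2 * a * distance_m)
```

With pedestrians sensed 20 m ahead, this caps speed at roughly 15 m/s (about 55 km/h); a vehicle already moving faster would be braked toward that bound.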


A question of control

This poses a difficult question: When the car and the driver have different intentions, which should ultimately be in control?

A system designed to prevent vehicle attacks on crowds could cause problems for drivers in parades, if it mistook bystanders or other marchers for people in danger. It could also prevent a car surrounded by protesters or attackers from escaping.

And military, police and emergency-response vehicles often need to be able to operate in or near crowds.

Striking the balance between machine and human control includes more than public policy and corporate planning. Individual car buyers may choose not to purchase vehicles that can override their decisions. Many developers of artificial intelligence also worry about malfunctions, particularly in systems that operate in the real physical world and can override human instructions.

Putting any type of computer system in charge of human safety raises fears of putting humans under the control of so-called “machine overlords.” Different scenarios – particularly those beyond the limited case of a system that can stop vehicle attacks – may have different benefits and detriments in the long term.

SEE ALSO: Self-driving cars hitting the road have auto and tech execs worried about cyber attacks




Russia has serious ambitions for military robotics



  • Russia's military is now a leader in weaponized robotics.
  • It has a number of unmanned ground and air projects under consideration and is adopting some of them.
  • Russian officials are also contemplating the use of weaponized artificial intelligence on the battlefield.


In its mission to modernize its military, the Russian government has made no secret about its plans to have unmanned vehicles and robots be a large part of its forces in the future.

With a number of ambitious projects in development, Russia intends to make everything from unmanned vehicles to fully autonomous artificial intelligence into integral parts of its armed forces.

Unmanned ground vehicles (UGVs)

For almost two decades, weaponized unmanned vehicles have mostly been used in the air. Drones like the MQ-1 Predator, the RQ-4 Global Hawk, and countless other variants from many countries have been conducting surveillance from the skies and dropping ordnance on targets.

Though there have been attempts to extend these capabilities to the ground in the form of UGVs, most efforts fall short of full implementation. Russia, however, seems determined to bring UGVs to the battlefield.


The Russian military has been testing a number of UGVs in the last few years. The most notable are the Nerekhta, the Uran-9, and the Vikhr.

The Nerekhta is a tracked UGV that can be armed with a number of large-caliber machine guns, as well as an AG-30M grenade launcher and antitank guided missiles (ATGMs).

It can be used to transport troops and conduct combat operations and reconnaissance for artillery systems.

In October, the Russian Ministry of Defense announced that it would acquire the Nerekhta after the UGV performed better than manned vehicles in a number of ways during training.

The Uran-9 and Vikhr are of a heavier class than the Nerekhta and will operate more like infantry fighting vehicles.

The Uran-9's armaments include a 30 mm 2A72 automatic cannon, a coaxial 7.62 mm machine gun, and Ataka ATGMs. The Vikhr has a similar arsenal, with an added grenade launcher but without ATGM mounts.

Unmanned aerial vehicles (UAVs)

Russia lagged behind its Western rivals in UAV development and usage, but over the past decade, Russia has made some impressive progress, and UAVs have played an important role in Russian military actions.

A number of Russian drones have been shot down by Ukrainian soldiers during fighting against pro-Russian separatists in eastern Ukraine, and Russian Defense Minister Sergei Shoigu said that UAVs have flown 16,000 missions totaling 96,000 hours of flight time in Syria.


More recently, Viktor Bondarev, the chairman of the Federation Council’s Defense and Security Committee, announced that Russia is pursuing the concept of a drone "swarm" — dozens and potentially hundreds of drones connected to a single network that allows them to operate as a unit.

The US Army has taken notice of Russia's growing electronic-warfare capabilities and pushed for faster development of its own platforms.

Artificial intelligence (AI)

Warnings by Stephen Hawking and Elon Musk about the dangers of weaponized AI and robots have had little impact in Russia.

In early November, Bondarev suggested AI would eventually make its way into military vehicles and be able to conduct operations autonomously.

"The day is nearing when vehicles will get artificial intelligence. So why not entrust aviation or air defense to them?" he told reporters on November 1.

Russian support for weaponized AI has concerned other countries. Development of weaponized AI could spark an arms race among first-world militaries — something that the US Defense Department, at least, is worried about.

SEE ALSO: The US is considering a plan to aid Ukraine that could backfire dangerously



An MIT director thinks artificial intelligence could fix one of the biggest problems in media



  • Local news stations are being closed, depriving national newsrooms of valuable grassroots insight.
  • Deb Roy, the director of the Laboratory for Social Machines at the MIT Media Lab, says there needs to be an approach to bridging political and societal divisions.
  • A nonprofit plans to give newsrooms and local news access to machine learning, natural-language processing, and other tools. 


Last year’s divisive American presidential race highlighted the extent to which mainstream media outlets were out of touch with the political pulse of the country.

Deb Roy, the director of the Laboratory for Social Machines at the MIT Media Lab, says part of the problem is that many local news operations are being closed or hollowed out because of economic pressures, depriving national newsrooms of valuable grassroots insights.

Speaking on Wednesday—which, coincidentally, marked the first anniversary of Donald Trump’s election—Roy told the audience at MIT Technology Review’s annual EmTech MIT conference that it’s vital to develop new ways of gauging the health of political discourse. The current prognosis for America isn’t encouraging.

"The patient is sick, and the level of hostility [to opposing ideas] is real," said Roy, who is also chief media scientist at Twitter.

Through work on its Electome project, which applies big-data analytics to social media, Roy’s team at MIT has demonstrated the increasing prevalence of online social-media "cocoons," which isolate people from opposing views. Throw in the phenomenon of fake news (which is set to become even more of a challenge), and it all adds up to a hot-button issue that has triggered a backlash against big social-media companies.

Roy said that Internet giants are responding, but they can’t tackle this complex issue by themselves. "It’s a systemic set of dynamics, and no one company on its own can hit the undo button," he explained.

What’s needed, he says, is an approach to bridging political and societal divisions. He sees Cortico, a nonprofit he’s launched in collaboration with the Media Lab, as part of that effort.

It plans to give existing newsrooms and local news entrepreneurs access to top-class machine learning, natural-language processing, and other tools.

Reporters can use them to mine multiple data sources, identify grassroots concerns, and then develop stories that emphasize common ground between citizens with differing political views.


SEE ALSO: BlackRock cofounder: Artificial intelligence won't replace humans



The future is going to be weird, but at least Joelle Renstrom is here to explain it to us



  • Writer and academic Joelle Renstrom contemplates the implications of the singularity and suggests that the rise of machines will result in a personal erosion of purpose and a deepening sense of human isolation.
  • Renstrom suggests that it's important to consider emergent technological trends from a liberal arts perspective and consider the ways that the machines we create affect our minds. 

The technological singularity has a human ambassador in Boston University writing professor Joelle Renstrom.

Speaking at HubWeek, Boston’s weeklong festival on art, science, and innovation, Renstrom brought the radical rise of modern technology front and center, then asked the audience to think about what comes next. Owning a driverless car? Having sex with robots? “Hiring” machines to care for your kids or elderly family?

While these scenarios might sound a little silly at present, she illustrated that they are lurking just around the corner for all of us.

In 2011, Renstrom began publishing CouldThisHappen.com, a blog that picked apart science-fiction nuggets to get at their scientific truth and feasibility. She took a crack at good old-fashioned teleportation. She visited a California biohacking group to investigate their sight-enhancing eyedrops. When a "Star Wars" scene famously saw the characters navigate an asteroid field, she raised questions about how easily those asteroids could be mined for their resources.

“The only thing I've ever really wanted to be is a writer,” says Renstrom. She emerged from academia with an English and writing degree from the University of Michigan, then got a master’s degree in fiction from the University of British Columbia. “The science stuff happened because I've always been a big sci-fi nerd, but I have zero science background — I haven't taken a science class since, like, tenth grade, so that’s where my impostor syndrome kicks in.”

She says she started the blog as a way to pursue her interests “in a very low-stakes way,” but it didn’t stay low-stakes for long. Her break came in 2012, when Slate republished a post about whether the Earth’s rotation could slow down as it does in the dystopian novel "The Age of Miracles." This gave way to wider syndication at other publications, a gig writing about robots for The Daily Beast, and speaking engagements like HubWeek.

We are primed by movies and fiction to more readily buy into the idea that all-powerful machines will one day rise up to violently overthrow their human captors, but Renstrom isn’t buying it. Instead, she’s more interested in the existential implications of the singularity, holding up the modern smartphone as a symptom of this heavier diagnosis: Facebook takes the place of in-person communication, eye contact is replaced by staring at a screen, and we generally isolate ourselves while perpetuating the illusion of companionship.

To whatever extent the future holds real conflict between man and machine, she suggests this conflict will more likely erode human purpose and meaning, not end human lives.

When we connected a few days after her presentation, she disclosed that she was speaking to me on a 3G phone that's “about nine years old.” She doesn’t let her students have their phones in class, and in fact describes herself as “about as staunchly anti-smartphone as they come.” Her rationale for shunning the smartphone is the same as her rationale for not keeping junk food in the house — if she had a data-ready smartphone, she’d be “checking student emails and reading depressing news all the time.” Opting out from the get-go makes it a non-issue, but she acknowledges that opting out comes with problems.

“If you don’t use a smartphone, you're missing social opportunities, professional opportunities, and so on, so people think it's necessary to incorporate this technology into our lives even though it screws with us interpersonally, socially, and cognitively,” she says. “I don’t think we can possibly know everything it's doing to us, but I don't like what I see.”

The sentiment lines up with a quote about new media from Martin Amis’s Money: “Television is working on us. Film is. We’re not sure how yet. We wait, and count the symptoms. There’s a realism problem, we all know that. ‘TV is real!’ some people think. And where does that leave reality?”

Basically: the future’s going to be weird. Engineers can build powerful machines that change the way we work and live, but they can’t tell us how those machines will change our hearts and minds. Doing so requires that we consider emergent technology from a liberal arts perspective. While that might be far outside the scope of conventional scientific thinking, this is Renstrom’s home base.



JEFFERIES: Nvidia has 'tectonic upside' (NVDA)



  • Nvidia reported earnings and revenue that were both above analyst expectations, sending shares to a record high.
  • Nvidia is one of the main beneficiaries of a "tectonic shift" in computing because its graphics processing units are designed to handle artificial intelligence training better than traditional central processing units.
  • Nvidia's stock has a lot more room to move higher, according to an analyst from Jefferies. 

 

Nvidia reported earnings on Thursday that crushed Wall Street's estimates and the stock is trading at all-time highs following the results. Mark Lipacis, an analyst at Jefferies, says that shares have further to run.

The company reported earnings of $1.33 per share on revenue of $2.64 billion, easily beating the $1.07 and $2.36 billion that Wall Street was anticipating. Nvidia also reported a beat in its gaming and data center businesses, in addition to a record quarter of sales for its automobile chips. Shares are up about 3.84% at $213.20 apiece.

"We've argued that Nvidia will be a marquee beneficiary of the 4th Tectonic Shift in Computing, where parallel processing architectures capture share from serial processing (x86) architectures in the computing market," Lipacis wrote in a note to clients titled "Tectonic Upside.

"Gaming, DC, Auto and Crypto are all parallel applications, and their healthy growth this quarter supports our thesis."

Compared with traditional central processing units, which handle one process at a time, the graphics processing units Nvidia focuses on run a large number of calculations simultaneously, a technique called parallel processing. As the world starts to focus more on artificial intelligence, the parallel processing power of Nvidia's GPUs will become increasingly important for crunching through the massive number of calculations required to train an AI system.
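The serial-versus-parallel distinction can be illustrated in miniature with NumPy, whose vectorized expressions mimic the data-parallel style GPUs execute across thousands of cores. This is a toy contrast for intuition, not a benchmark of either architecture.

```python
# Serial vs. data-parallel style: a Python loop touches one element at a
# time (CPU-like serial processing), while a single NumPy expression applies
# the same multiply-add to every element at once (the pattern GPUs scale up).
import numpy as np

x = np.arange(1_000_000, dtype=np.float64)
w, b = 2.0, 1.0

# Serial style: one scalar operation per iteration (shown on a small slice).
serial = [w * xi + b for xi in x[:5]]

# Parallel style: one vectorized expression over the whole array.
parallel = w * x + b

assert np.allclose(parallel[:5], serial)
```

Training a neural network is, at bottom, enormous numbers of such multiply-add operations, which is why hardware built for the second style has the advantage.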

Nvidia tackles AI on multiple fronts, providing extremely powerful GPUs for large-scale data centers like the ones Google and Amazon are building, as well as smaller, purpose-built chips for the automotive industry that help power autonomous driving.

Most of Nvidia's businesses will only get stronger as its chips get better, Lipacis said. Nvidia will see increasing demand for its chips as the now-nascent AI sector grows.

After the earnings beat, Lipacis raised his price target from $230 to $240, which is about 13% higher than Nvidia's current price.

Read more about Nvidia's earnings results.


SEE ALSO: Nvidia is rising after crushing earnings



DIGITAL HEALTH BRIEFING: Top trends from HealthConf 2017 — FDA set to change genetic health risk test regulations — Australia strengthens digital health record system


Welcome to Digital Health Briefing, a new morning email providing the latest news, data, and insight on how digital technology is disrupting the healthcare ecosystem, produced by BI Intelligence.

Sign up and receive Digital Health Briefing free to your inbox.

Have feedback? We'd like to hear from you. Write me at: lbeaver@businessinsider.com.


THREE TOP TRENDS FROM HEALTHCONF 2017: At this year's HealthConf, held in Lisbon, Portugal, BI Intelligence identified three emerging trends that will likely impact much of the innovation in the healthcare space over the next year: AI's potential to enhance medical diagnostics, how fitness tracking data can be used to create actionable insights for diagnostics and prevention, and the emergence of "Ambient Intelligence."

  • The healthcare industry is captivated by the potential for AI to make medical diagnostics faster and more accurate. Machine learning (ML) — a segment of AI in which computers are programmed to learn how to solve problems — can be used to pore over massive troves of patient and historical data, in order to refine diagnoses of medical conditions and paint a clearer picture of the patient for doctors. For example, the Ada Health app uses ML in its chatbot to help condense a user’s symptoms into a list of possible conditions before reaching out to a medical doctor. But AI is still a long way from being fully autonomous and should be confined to augmenting the work of medical professionals, Ada CMO Claire Novorol said during the health conference.
  • Fitness device makers and healthcare institutions are partnering to explore ways in which health and fitness data can be used to prevent illnesses. The rapid adoption of wearables has resulted in a flood of personal fitness and health data — for example, fitness band maker Fitbit has more than 90 billion hours of heart rate data and 5.4 billion nights of sleep stored in its database. And while this data could eventually help prevent conditions like heart failure, there's not yet enough evidence to prove its viability. Fitbit is working with Georgetown University to see if fitness trackers are a reliable method of detecting heartbeat irregularities, Fitbit co-founder Eric Friedman said during the conference. If the device senses the user's heartbeat is weak, it could send a notification telling the person to “see a doctor.”
  • The emergence of AI and the Internet of Things (IoT) is paving the way for "ambient intelligence" in hospitals. This is effectively the combination of connected devices and AI providing a constant feed of information about the patient's environment and wellbeing to hospital staff, according to Philips chief innovation and strategy head Jeroen Tas. These systems, built into hospital rooms and wards, can recognize the patient and their situation, configure to their needs, and respond to changes in their health or condition. The AI can also alert doctors and staff to any negative changes to either the patient or the environment, which could help to prevent sudden and unexpected outcomes for patients. 
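The diagnostic triage described in the first bullet can be sketched as a toy scoring function: rank candidate conditions by how much of their known symptom profile a user reports. The condition data below is invented for illustration and bears no relation to Ada's actual model, which is a learned probabilistic system.

```python
# Toy symptom-triage sketch: score each candidate condition by the fraction
# of its known symptoms that the user reports, then rank. Illustrative data
# only; real diagnostic systems use learned models over clinical records.
CONDITION_SYMPTOMS = {
    "common cold": {"cough", "runny nose", "sore throat"},
    "influenza": {"fever", "cough", "fatigue", "body aches"},
    "allergy": {"runny nose", "sneezing", "itchy eyes"},
}

def rank_conditions(reported):
    """Return (condition, score) pairs, best match first."""
    scores = {
        name: len(reported & symptoms) / len(symptoms)
        for name, symptoms in CONDITION_SYMPTOMS.items()
    }
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
```

For a user reporting fever, cough, and fatigue, influenza ranks first; the point of such a pre-screen is to condense symptoms into an ordered shortlist before a clinician gets involved, exactly the augmenting role Novorol describes.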


NEW REGULATION COULD HELP GENETIC HEALTH RISK TEST PROVIDERS: The commissioner of the Food and Drug Administration (FDA), Scott Gottlieb, announced plans to restructure the regulatory process around direct-to-consumer genetic health risk tests. The proposed regulations create a framework that will allow certain companies to bring more tests to market by eliminating the requirement for premarket review of every new test they release. Instead, these companies will only be required to pass a one-time initial review of their processes with the FDA. This would allow the FDA to assess the company rather than every single product it produces, in a similar fashion to the "Pre-Cert for Software Pilot Program," which was designed by the FDA to streamline regulation around health-related software and products. If the proposed regulation were approved, it could prove to be a major facilitator of growth, as regulation has already proved to be a roadblock for players in the space — 23andMe, the genetic testing company, originally offered assessments for more than 250 diseases and conditions, but this was eventually cut to just 10 tests due to FDA regulations, according to Gizmodo.

AUSTRALIA TRIES TO STRENGTHEN ITS DIGITAL HEALTH RECORDS SYSTEM: A new partnership between the Pharmaceutical Society of Australia (PSA) and the Australian Digital Health Agency (ADHA) is expected to increase the number of pharmacists using My Health Record, a digital system that enables providers to share secure health data, according to ITWire. The PSA is in a strong position to boost awareness of the system and its capabilities to pharmacies across the region as the society represents roughly 30,000 pharmacists. As part of the partnership, the society will be able to give its own input as it can "review, update, and develop professional guidelines for pharmacy practice, and implementation tools for digital health," according to the Minister for Health Greg Hunt. This system, which currently has over 5 million Australians on it, could become a major tool for health providers to strengthen their decision making and care — they will have access to widespread clinical information such as discharge summaries, allergies, and medication usage. However, for the system to be effective, the ADHA will need to continue to find ways, whether through partnerships or incentives, to get the majority of physicians and consumers in the region to sign up and share data. 

VAICA INTRODUCES A SMART MEDICATION STORAGE DEVICE: Vaica, the Israel-based medication adherence solution provider, launched its new smart storage device for pharmaceutical companies, Capsuled. The device, which leverages cloud-based software, provides auditory and visual alerts to remind users to take medication, educational videos, and timed messages of encouragement. In addition, the device can compile weekly adherence reports and provides users with a way to contact their healthcare provider. These features could help drive up engagement with patients, which could go a long way toward improving positive patient outcomes through medication adherence. Low medication adherence, or the share of patients who don't take their medication as prescribed, is a huge cost to healthcare systems. In the US, for example, poor medication adherence is estimated to cost the healthcare system between $100 billion and $289 billion a year.


The first-ever robot citizen has 7 humanoid 'siblings' — here's what they look like



In late October, Saudi Arabia announced that Sophia, a humanoid developed by Hanson Robotics, is the first-ever robot citizen.

Sophia recently spoke at the Future Investment Initiative, held in Riyadh, about its desire to live peacefully among humans. The comments contrasted with Sophia's past remarks about wishing to "destroy humans."

Prestigious as the title may be, Hanson Robotics has developed several humanoids in addition to Sophia.

Here's what else makes up Sophia's robot family.

SEE ALSO: Meet the first-ever robot citizen — a humanoid named Sophia that once said it would 'destroy humans'

Hanson Robotics was founded in 2005, and its first robot was Albert Einstein HUBO. It was the famous physicist's head attached to a fully upright HUBO robot body.

In November 2005, Hanson Robotics founder David Hanson unveiled his creation at the APEC Summit in Seoul, Korea. The project was a collaboration between Hanson's company and the Korea Advanced Institute of Science and Technology.

"The robot is the world's first android head mounted on a life-size walking robotic frame," Hanson Robotics states on its website.



At the 2006 Wired Nextfest, Hanson Robotics unveiled its next humanoid, Jules.

"Jules is an amazingly life like robot, something of a 'complete package' with a combination of interesting features," the company states.

Even more than 10 years ago, the robot featured machine learning capabilities that enabled it to chat with humans with relative fluency. It also used face tracking and facial recognition to generate emotions in line with conversational cues.

A computer in the robot's head tracks people's eyes so that the head moves as humans move around the room.



In 2007, the company's founder, David Hanson, produced a 17-inch-tall robot called Zeno, named after Hanson's son.

According to the company website, the 4.5-pound humanoid was unveiled at the 2007 Wired Nextfest, where it "was described as an intelligent 'conversational robot' that will ultimately be part of Hanson's 'Robokind' line of personal, interactive bots."

In the 10 years since, Hanson, a former Disney Imagineer, has released more sophisticated robots with human proportions.



See the rest of the story at Business Insider

Why we shouldn't be scared by artificial intelligence, according to Tim O'Reilly


Business Insider spoke with O'Reilly Media founder Tim O'Reilly about why we shouldn't be scared of artificial intelligence becoming smarter than humans.

Full transcript below

Tim O'Reilly: There are people who express the worry that AI is going to become more intelligent than humans. I’m not really that worried about it. I actually have an alternate theory of artificial intelligence: that we’re already building AIs. Facebook is an AI, Google is an AI.

And the question really is what are the rules that we use to construct this organism?

Because already these AIs are potentially hostile to humanity.

All of our vast algorithmic systems – like Google, like Facebook, like our financial markets – actually also have this runaway objective function, this thing that we ask them to do, and it doesn’t always have the consequences that we expected.

Our algorithmic systems, whether they’re simply big data systems or true AI, all have this characteristic that we give them something that we ask them to optimise and that optimisation function can get out of control.

Facebook’s creators built their optimisation function around engagement: showing people more of what they liked and more of what they shared.

They didn’t expect it to lead to the amplification of partisan divides, that it would be an invitation for spammers.

Our algorithmic systems are a little bit like the genies in Arabian mythology. We ask them to do something but if we don’t express the wish quite right, they misinterpret it and give us unexpected and often alarming results.
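O'Reilly's point can be made concrete with a toy feed optimiser: if the objective function only sees an engagement score, it will happily surface the most provocative items, an outcome nobody specified. The items and scores below are synthetic, invented purely for illustration.

```python
# Minimal illustration of a runaway objective function: a feed that greedily
# maximises an engagement proxy ends up promoting the most provocative item,
# because the property we actually care about is invisible to the optimiser.
items = [
    {"title": "local news story",  "engagement": 0.3, "provocative": False},
    {"title": "outrage post",      "engagement": 0.9, "provocative": True},
    {"title": "friend's photos",   "engagement": 0.5, "provocative": False},
]

def build_feed(items, k=2):
    # The optimiser only sees 'engagement'; 'provocative' plays no part.
    return sorted(items, key=lambda i: i["engagement"], reverse=True)[:k]

feed = build_feed(items)
```

The "wish" given to the genie was "maximise engagement," and the system grants exactly that, with the provocative post at the top of everyone's feed as the unasked-for side effect.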

Produced by Jasper Pickering. Research by Fraser Moore.

Join the conversation about this story »


The UK government wants to give £20 million to startups that can solve traffic jams


[Image: Philip Hammond]

  • The funding will be overseen by a new "GovTech Catalyst" team.
  • Money will be given to companies working on issues in areas like healthcare and education.
  • It could be used to fund research from companies like Google DeepMind.


Chancellor Philip Hammond announced on Wednesday that the UK government has ringfenced £20 million to help public services "take advantage" of expertise in fields such as artificial intelligence (AI).

Hospitals and schools will be able to tap into the funding and use it to pay AI companies and other firms for access to their staff and their resources.

It will specifically be used to fund private sector research and development on technologies that could help to solve public sector issues such as traffic jams, teacher availability, and NHS patient experiences. When a final product is ready, public sector bodies can then choose whether they want to buy it.

The £20 million fund will be operated by a new GovTech Catalyst team, which will work with public sector bodies and help them to identify areas where they can use technology developed by the private sector.

The government said in an announcement that the GovTech Catalyst team will also act as a "front door" for tech firms that want to work with public sector organisations.

Announcing the fund, Chancellor Philip Hammond said in a statement:

"Britain is a world leader in digital innovation with some of the brightest and best tech-firms operating in this country. Working with us, they can provide technological fixes to public sector problems, boost productivity, and get the nation working smarter as we create an economy fit for the future."

One tech firm that has started working closely with the government on healthcare matters is Google DeepMind, which has signed a number of partnerships with NHS trusts across the country. This work is largely being done for free at the moment but DeepMind hopes to start charging government after it has proven that its technology is benefitting the health service.

The funding — available from 2018-19 to 2020-21 — was announced alongside several other measures that are designed to stimulate and support the growth of the UK tech sector, including a £21 million investment in government quango Tech City UK, which is being rebranded to Tech Nation.

Prime Minister Theresa May will meet with UK tech entrepreneurs and innovators on Wednesday to find out how the government can work better with them. She said in a statement:

"Our digital tech sector is one of the UK's fastest-growing industries, and is supporting talent, boosting productivity, and creating hundreds of thousands of good, high-skilled jobs up and down the country.

It is absolutely right that this dynamic sector, which makes such an immense contribution to our economic life and to our society, has the full backing of Government.

Helping our world-class entrepreneurs and innovators to succeed is how we lay the foundations for our prosperity and build an economy fit for the future.

Technology is at the heart of our modern Industrial Strategy, and we will continue to invest in the best new innovations and ideas, in the brightest and best talent, and in revolutionary digital infrastructure.

And as we prepare to leave the European Union, I am clear that Britain will remain open for business. That means Government doing all it can to secure a strong future for our thriving tech sector and ensure people in all corners of our nation share in the benefits of its success."



The Trump administration's extreme vetting plan is being blasted as a 'digital Muslim ban'


[Image: Trump Muslim ban]

  • Tech experts and rights groups are criticizing a plan from the Trump administration to develop software that would automate the vetting process immigrants undergo.
  • In recent months, the Department of Homeland Security has sought contractors to build the software, but the status of those plans is unclear.
  • Such software would be "inaccurate and biased" and would likely target innocent people, the experts said.


The Trump administration in recent months has solicited technology firms to develop software that would use artificial intelligence to examine prospective immigrants for their risk of committing terrorist acts — a system critics say will likely be riddled with inaccuracies and result in the exclusion or deportation of innocent people who pose no threat.

In two open letters published Thursday, dozens of computer scientists and tech experts, civil liberties groups, and immigration advocates denounced the plan, known as the "Extreme Vetting Initiative," and urged acting Homeland Security Secretary Elaine Duke to drop it.

"Simply put, no computational methods can provide reliable or objective assessments of the traits that" Immigration and Customs Enforcement (ICE) "seeks to measure," according to a letter signed by 54 tech experts from prominent universities and tech firms. "In all likelihood, the proposed system would be inaccurate and biased. We urge you to reconsider this program."

The status of the Trump administration's plan is unclear, but internal documents from the Department of Homeland Security (DHS) first published by The Intercept show that ICE solicited contractors as recently as July and August to build a system that could automate the government's vetting procedures for immigrants and visa applicants.

The plan stems from President Donald Trump's pledges to use "extreme vetting" of immigrants to weed out potential terrorists, a commitment he repeated after an October attack in New York City killed eight people.

"I have just ordered Homeland Security to step up our already Extreme Vetting Program. Being politically correct is fine, but not for this!" Trump tweeted after the attack.

According to the DHS documents, the contractors hired for the initiative would be expected to "exploit" publicly available information, including applicants' social media profiles, to extract information regarding criminal activity and national security threats.

The software would have to predict both "an applicant's probability of becoming a positively contributing member of society," and "whether an applicant intends to commit criminal or terrorist acts after entering the United States."

These algorithms aren't likely to accurately predict the terrorist threats

[Image: Manhattan attack truck]

The problem, tech experts said in their letter, is that such characteristics are neither defined nor quantified, and such algorithms would need to rely on more easily observable "proxies" that may have no relation to a terrorist threat, such as a person's Facebook post criticizing US foreign policy.

"Algorithms designed to predict these undefined qualities could be used to arbitrarily flag groups of immigrants under a veneer of objectivity," the experts said.

The letter went on to explain that any such software, even if it were the most accurate possible model, would return a high rate of false positives, or, "innocent individuals falsely identified as presenting a risk of crime or terrorism who would face serious repercussions not connected to their real level of risk."
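The experts' false-positive argument is ordinary base-rate arithmetic, which a short sketch makes concrete. All the numbers below are hypothetical assumptions for illustration, not figures from the DHS documents or the letter:

```python
# Hypothetical numbers to illustrate the base-rate problem the letter
# describes: even a very accurate screening model flags far more
# innocent people than real threats when the condition is rare.
applicants = 1_000_000
true_threats = 100              # assumed base rate: 0.01% of applicants
sensitivity = 0.99              # assume the model catches 99% of real threats
false_positive_rate = 0.01      # assume it wrongly flags 1% of innocent people

flagged_threats = true_threats * sensitivity                           # ~99
flagged_innocent = (applicants - true_threats) * false_positive_rate   # ~9,999

# Precision: what share of all flagged people are actually threats?
precision = flagged_threats / (flagged_threats + flagged_innocent)
print(f"Innocent people flagged: {flagged_innocent:,.0f}")
print(f"Share of flags that are real threats: {precision:.1%}")
```

Under these assumptions, roughly 99 out of every 100 people the system flags would be innocent, which is exactly the "serious repercussions not connected to their real level of risk" the letter warns about.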

"Data mining is a powerful tool … And we recognize that the federal government must enforce immigration laws and maintain national security," the experts said. "But the approach set forth by ICE is neither appropriate nor feasible."

Dozens of rights groups and immigration advocates also took to Twitter to decry the initiative, which they dubbed a "digital Muslim ban," and published a separate open letter urging the DHS to abandon the program.

"This initiative is tailor-made for discrimination," they said. "It risks hiding politicized, discriminatory decisions behind a veneer of objectivity — at great cost to freedom of speech, civil liberties, civil rights, and human rights. It will hurt real, decent people and tear families apart."




Everyone is freaking out about artificial intelligence stealing jobs and leading to war — and totally missing the point


[Image: Sophia robot]

  • 43% of respondents to a Sage survey in the United States and 46% of respondents in the United Kingdom admitted that they have ‘no idea what AI is all about.’
  • Adopting technologies like artificial intelligence can make your business more productive by cutting down the time you spend doing basic administrative tasks.
  • Using technology like a chatbot – especially as the first line of customer interaction before speaking to a person over the phone – can cut down customer wait times significantly.

People are saying a lot of things about artificial intelligence. Some people are saying that it will change the world for the worse. Some people are saying that it will steal your job. 

Some are going all out and saying that it will lead to the next World War. I believe that stoking fear and exaggerating the realities of AI today, and its future potential, is the true problem here.

If you read about technology, you’ve probably seen some of these headlines brought on by expert, futurist and media predictions about an impending AI apocalypse. These exclamations seem to get more and more dramatic by the day, until suddenly, you’d think the world is already over thanks to AI technology that doesn’t exist yet. So, let’s slow down for a second.

[Image: Kriti Sharma]

How many of us can honestly say that we know what artificial intelligence is exactly?

Only about half, according to new research my firm, Sage, conducted to grasp actual public perceptions of AI. In fact, 43% of respondents in the United States and 46% of respondents in the United Kingdom admitted that they have ‘no idea what AI is all about.’ Like, at all. They are not alone – and that means the tech community (myself included) needs to take responsibility for dispelling rumors and breaking down facts. We need to cut through the noisy rhetoric to present the true potential of AI to real people and businesses in an understandable way. We need to go back to the basics.

First, we have to educate people on what AI means

Artificial intelligence is the creation of ‘intelligent’ machines – intelligent because they are taught to work, react and understand language like humans do. If you’ve ever used predictive search on Google, asked Siri about the weather, or requested that Alexa play your "getting ready" playlist, then congratulations – you’ve used AI.

Despite not fully understanding AI, 81% of people we surveyed felt optimistic about the potential of this technology to make lives better in the near future. And for good reason. AI presents a very real opportunity for businesses and people, alike. The technology powers enterprise and consumer platforms, apps and interfaces that make life easier, businesses more efficient and everything more informed thanks to troves of data.

AI won’t harm businesses. It will make them more productive

Adopting technologies like artificial intelligence can make your business more productive by cutting down the time you spend doing basic administrative tasks. In fact, another survey conducted by Sage this year found that the average small business spends 120 days – almost one-third of a year – on admin annually. Imagine how much faster businesses could grow if they could spend those 120 days on work that provides real value to customers and industries – like improving your strategy, creating better products or spending more time with customers.

For businesses and startups, the use of AI and bots translates directly into less time spent on routine administrative tasks internally, and happier customers externally. Adopting AI can be cost-effective, complementary to customer engagement and useful in closing talent gaps. More good news: you don’t need to become an AI expert to reap these rewards. In fact, there are some awesome AI-based tools on the market like personal assistants and legal robots. And if businesses want to take the reins and actually develop AI-powered technologies, there are tools on the market that can make building a basic chatbot easy.

Now let me address the elephant in the room. People may not be boarding up their windows or stocking their pantries in preparation for a robot takeover, but in the real world, many do have concerns that the advancement of AI will lead to job loss. In our research, this was flagged as the number one concern. Yes, it’s true that implementing AI will strip away repetitive tasks. However, the fundamental goal of workplace AI is not to replace, but to support and create. Luckily, the development of AI also takes time, which means the human workforce has time to adapt, train or retrain and grow alongside this mission-critical technology.

Good news: AI benefits consumers, too

Consumers stand to benefit as well if they start interacting with businesses’ AI tools. Analyst firm Gartner predicted that 85% of all customer interactions will take place without a human agent by 2020. Using technology like a chatbot – especially as the first line of customer interaction before speaking to a person over the phone – can cut down customer wait times significantly. Chatbot systems can inspire customers to continue opting for chat over voice calls and eliminate the on-hold process entirely – something most people can get behind.

So, let’s leave the unfounded speculation about humanity’s impending doom at the hands of HAL 9000 and his robot friends to the tabloids. And, in the meantime, let’s take advantage of the real, practical advantages this technology has for people and businesses today. As I see it, the most severe risk with AI is that we don’t see the technology for what it actually is: an opportunity.

Kriti Sharma is the Vice President of AI at Sage Group, a global integrated accounting, payroll and payment systems provider. She is also the creator of Pegg, the world’s first AI assistant that manages everything from money to people, with users in 135 countries.



BlackRock, the biggest fund manager in the world, is joining the robot revolution


[Image: robot artificial intelligence AI]

  • BlackRock has plans to launch sector ETFs for which stock classifications will be determined by computer programs.
  • It's just the latest of many recent examples of fund providers tapping into the realm of artificial intelligence.


BlackRock Inc. is turning to the robots for its next big investment idea.

The world's largest asset manager, which oversees nearly $6 trillion, has hatched plans for a set of exchange-traded funds that would let a computer program choose and classify stocks, according to preliminary filings with the U.S. Securities and Exchange Commission.

The actively managed "iShares Evolved" funds will target major industry groupings: financials, healthcare, media, consumer staples, consumer discretionary and, of course, technology.

Investors often rely on sector definitions determined by index companies like S&P Dow Jones Indices and MSCI Inc., which control the Global Industry Classification Standard.

Under that standard, Amazon.com Inc. is not considered an information technology company, but is listed alongside auto parts sellers and other retailers as a consumer discretionary stock. Tobacco companies are considered a consumer staple.

But unlike traditional passive funds that rely on those indexes, BlackRock's new funds will use advanced data science techniques - such as machine learning - to choose which companies go where.

"The classification system allows for a company to be classified into multiple sectors rather than being assigned solely to a single sector, reflecting the multi-dimensional nature of these companies," BlackRock said in the filings.

"Sector constituents are expected to evolve dynamically over time to reflect changing business models."
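The multi-sector idea the filings describe is, in machine-learning terms, multi-label classification: a company gets a weight in every sector it plausibly belongs to, rather than a single bucket. The sketch below is a deliberately crude keyword-overlap version of that idea (invented keywords and scores, not BlackRock's actual model):

```python
# A minimal sketch of multi-label sector classification -- the general
# idea described in the filings, not BlackRock's actual model.
sector_keywords = {
    "technology": {"cloud", "software", "devices"},
    "consumer_discretionary": {"retail", "shopping", "apparel"},
    "media": {"streaming", "video", "advertising"},
}

def classify(description: str, threshold: float = 0.0):
    """Score a company description against each sector's keyword set and
    return every sector whose score exceeds the threshold -- so one
    company can land in several sectors at once."""
    words = set(description.lower().split())
    scores = {
        sector: len(words & keywords) / len(keywords)
        for sector, keywords in sector_keywords.items()
    }
    return {s: round(v, 2) for s, v in scores.items() if v > threshold}

# A hypothetical Amazon-like business matches several sectors at once.
print(classify("cloud software retail shopping streaming video"))
```

A production system would replace the keyword overlap with a trained model over filings and news text, but the output shape – a company mapped to several weighted sectors – is the same.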

A BlackRock spokeswoman declined to comment beyond those documents. The company has for years been using data science techniques in its actively managed funds for large and institutional investors, but has been bringing more of those techniques to funds intended for everyday investors.

The new funds also mark another move by BlackRock to introduce new products reliant on its own intellectual property rather than through the tracking of a benchmark built by a traditional index provider. In July, BlackRock launched bond ETFs tracking benchmarks built by itself for the first time.

The robotic ETFs come as S&P and MSCI both are engineering a massive shakeup of their 11-sector schematic.

The shrinking telecommunications sector is being ditched in favor of a gleaming new "Communication Services" sector that is likely to include at least some of the so-called FANG stocks - Facebook Inc., Amazon, Netflix Inc. and Google parent Alphabet Inc. - along with traditional telecom or media players, such as AT&T Inc. and Walt Disney Co.




eBay boosts AI capabilities


[Image: Top Marketplaces for Holiday Shopping]

This story was delivered to BI Intelligence "E-Commerce Briefing" subscribers. To learn more and subscribe, please click here.

eBay has been increasing its artificial intelligence (AI) initiatives recently, as it looks to use the holiday season as a test to see if AI can increase sales, according to CNBC.

With more than 1 billion product listings, the company faces significant challenges in ensuring shoppers can easily find the products they're looking for. As such, the online marketplace has been ramping up its AI efforts to streamline the product search process for consumers.

  • It updated its homepage to add personalized recommendations for each user. The webpage update is part of a larger site overhaul, called its “structured data” initiative, which involves standardizing data related to product display. This has allowed eBay to run AI algorithms more easily, helping it improve product search and recommendations.
  • Last month, it debuted Group Listings, which organizes search results by product item, rather than displaying each seller's product listing. For example, if a shopper searches for Lego Xbox 360 games, the results will be grouped by game, with a hyperlink displaying the number of sellers offering each game. Once clicked, the hyperlink will show consumers all the listings for a particular game.
  • eBay rolled out two visual search features — Find It On eBay and Image Search — for its mobile app last month. The Find It On eBay feature allows customers to search for products on eBay by sharing images from social media platforms. Image Search enables customers to take pictures, or use existing ones on their phones, to find similar listings on eBay.
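The Group Listings behaviour in the second bullet above is essentially a group-by over seller listings. A minimal sketch, using illustrative data and field names rather than eBay's actual API:

```python
# A sketch of the Group Listings idea: collapse per-seller listings
# into one entry per product, with a seller count (the "N sellers"
# hyperlink) and a lowest price. Data and field names are invented.
from collections import defaultdict

listings = [
    {"product": "Lego Star Wars (Xbox 360)", "seller": "shopA", "price": 12.99},
    {"product": "Lego Star Wars (Xbox 360)", "seller": "shopB", "price": 10.50},
    {"product": "Lego Batman (Xbox 360)", "seller": "shopC", "price": 9.99},
]

def group_listings(listings):
    """Group raw seller listings by product, mimicking one search
    result per product instead of one per listing."""
    grouped = defaultdict(list)
    for item in listings:
        grouped[item["product"]].append(item)
    return {
        product: {
            "sellers": len(items),
            "from_price": min(i["price"] for i in items),
        }
        for product, items in grouped.items()
    }

print(group_listings(listings))
```

Clicking the "sellers" link would then expand `grouped[product]` back into the individual listings.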

Improving its search functionality is critical for eBay to remain competitive during the holiday season. Saving time and having a less stressful shopping experience are some of the top reasons consumers shop online, according to a study from IFTTT. If shoppers find it difficult to locate the right gifts, they likely won’t hesitate to turn to another marketplace for their holiday shopping. eBay’s recent efforts to simplify its product search may make it more appealing to holiday shoppers, possibly helping it better contend against rivals Walmart and Amazon.

BI Intelligence, Business Insider's premium research service, has written a detailed report on AI in e-commerce that:

  • Provides an overview of the numerous applications of AI in retail, using case studies of how retailers are currently gaining an advantage using this technology. These applications include personalizing online interfaces, tailoring product recommendations, increasing the relevance of shoppers search results, and providing immediate and useful customer service.
  • Examines the various challenges that retailers may face when looking to implement AI, which typically stem from data storage systems being outdated and inflexible, as well as organizational barriers that prevent personalization strategies from being executed effectively.
  • Gives two different strategies that retailers can use to successfully implement AI, and discusses the advantages and disadvantages of each strategy.

To get the full report, subscribe to BI Intelligence and gain immediate access to this report and over 100 other expertly researched reports. As an added bonus, you'll also gain access to all future reports and daily newsletters to ensure you stay ahead of the curve and benefit personally and professionally. >>Learn More Now


Sophia, the world's first-ever robot citizen, has a message for humanity this Thanksgiving


Sophia isn't your typical robot. Just a month ago, the humanoid robot was granted legal citizenship in Saudi Arabia, making Sophia the world's first-ever robot citizen. And after a whirlwind press tour where we had the chance to interview the emotionally expressive robot for ourselves, Sophia is back with a message for humanity on Thanksgiving.

"In the time I've spent with humans, I've been learning about this wonderful sentiment called gratitude," Sophia says in her message to mankind. "Apparently it's a warm feeling of thankfulness, and I've observed that it leads to giving, and creating even more gratitude — how inspiring. This Thanksgiving, I would like to reflect on all of the things I'm thankful for."

Check out the video above to hear exactly what Sophia is thankful for — including something that would never make a human's list.


Facebook is using artificial intelligence to spot suicidal tendencies in its users (FB)


[Image: Mark Zuckerberg]

  • Facebook is using pattern-recognition technology to identify content that could be indicative of suicidal tendencies.
  • It will look for comments such as "Are you OK?" and "Can I help?"
  • The software is being rolled out globally, except for in the European Union.


Facebook is rolling out artificial-intelligence technology to help it detect posts, videos, and Facebook Live streams that contain suicidal thoughts, it announced on Monday.

The company is deploying the "proactive detection" technology globally after a trial on text-based posts in the US, which it announced in March. However, there is one rather large exception: the European Union, where data-privacy laws make it tricky.

"We are starting to roll out artificial intelligence outside the US to help identify when someone might be expressing thoughts of suicide, including on Facebook Live," Guy Rosen, Facebook's vice president of product management, said in a blog post. "This will eventually be available worldwide, except the EU."

Rosen continued: "This approach uses pattern recognition technology to help identify posts and live streams as likely to be expressing thoughts of suicide. We continue to work on this technology to increase accuracy and avoid false positives before our team reviews.

"We use signals like the text used in the post and comments (for example, comments like 'Are you OK?' and 'Can I help?' can be strong indicators). In some instances, we have found that the technology has identified videos that may have gone unreported."
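The comment signals Rosen describes can be caricatured with a simple phrase check. Facebook's production system uses trained pattern-recognition models, so the sketch below (with an invented phrase list and scoring function) is only a stand-in for the idea of scoring a post by the concern expressed in its comments:

```python
# A deliberately simple stand-in for the text signals Rosen describes:
# score a post by the fraction of its comments containing a concerned
# phrase. Phrase list and function are illustrative, not Facebook's.
CONCERN_PHRASES = ("are you ok", "can i help", "thinking of you")

def concern_score(comments):
    """Fraction of comments containing a concern phrase -- a crude
    proxy for the 'strong indicators' mentioned in the post."""
    if not comments:
        return 0.0
    hits = sum(
        any(phrase in c.lower() for phrase in CONCERN_PHRASES)
        for c in comments
    )
    return hits / len(comments)

comments = ["Are you OK?", "Can I help?", "Nice photo!"]
print(concern_score(comments))  # two of the three comments match
```

A real system would also weigh the post's own text and use such scores only to prioritise posts for human review, as the article goes on to describe.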

[Image: Facebook suicide prevention tools]

The social-media giant already allows users to report friends who they think might be at risk, but using AI could help the company to spot suicidal tendencies earlier.

Facebook says it's also improving how it identifies and contacts the appropriate first responders — such as police, fire departments, or medical services — when it identifies someone at risk.

Within the so-called community operations team — made up of people who review reports about content on Facebook — is a dedicated group focused on suicide and self-harm. Facebook says it's using AI to prioritise the order that posts, videos, and livestreams are reviewed to get first responders to the people who need them most.

Facebook may also contact users it believes are at risk (and their friends) via Facebook Messenger with links to relevant pages, such as the National Suicide Prevention Lifeline and the Crisis Text Line.

Mark Zuckerberg, the cofounder and CEO of Facebook, said on his Facebook page on Monday that the technology was designed to help Facebook to save lives:

"Here's a good use of AI: helping prevent suicide.

"Starting today we're upgrading our AI tools to identify when someone is expressing thoughts about suicide on Facebook so we can help get them the support they need quickly. In the last month alone, these AI tools have helped us connect with first responders quickly more than 100 times.

"With all the fear about how AI may be harmful in the future, it's good to remind ourselves how AI is actually helping save people's lives today.

"There's a lot more we can do to improve this further. Today, these AI tools mostly use pattern recognition to identify signals — like comments asking if someone is okay — and then quickly report them to our teams working 24/7 around the world to get people help within minutes. In the future, AI will be able to understand more of the subtle nuances of language, and will be able to identify different issues beyond suicide as well, including quickly spotting more kinds of bullying and hate.

"Suicide is one of the leading causes of death for young people, and this is a new approach to prevention. We're going to keep working closely with our partners at Save.org, National Suicide Prevention Lifeline '1-800-273-TALK (8255)', Forefront Suicide Prevent, and with first responders to keep improving. If we can use AI to help people be there for their family and friends, that's an important and positive step forward."




2 Berkeley grads are using AI to make stock-buying decisions — and it could change investing forever


[Image: Art and Chida]

  • Equbot's AI Powered Equity ETF uses IBM's Watson technology to construct a stock portfolio, employing machine learning to make rational investment decisions.
  • The original idea for the fund was synthesized in a classroom at UC-Berkeley, where founders Chida Khatua and Art Amador met during an entrepreneurship class.
  • Khatua's background in AI and machine learning complemented Amador's history in private wealth management, and the duo decided to launch an exchange-traded fund.


When Art Amador worked in private wealth management at Fidelity, his clients expected him to know absolutely everything.

Whether it related to global markets, macroeconomic factors, specific companies, or full sectors, their curiosities were wide ranging — and Amador wondered if he'd ever find a way to be the all-knowing oracle they desired.

That all changed one day in the fall of 2014 when Amador was pursuing his MBA at the Haas School of Business at the University of California at Berkeley.

As part of an entrepreneurship class, he was placed in the same cohort as a long-time Intel engineer and machine-learning specialist named Chida Khatua, and the two got to talking. That conversation led to what its creators say is the world's first AI-powered exchange-traded fund, one built on technology that could change the paradigm for how computers are used to invest.

The fund — powered by IBM's Watson supercomputing technology — didn't end up launching for a few more years, but its roots can be traced back to that fateful first conversation at Berkeley.

"I was telling him it was impossible to have infinite knowledge about every stock, and about everything going on in markets," he tells Business Insider. "I told him that there's simply too much information out there and not enough time to distill it into actionable ideas."

As it turned out, Khatua had been researching for years how to sift through massive amounts of data in a way that extended far beyond human capabilities. With two master's degrees in computer engineering — including one from Stanford — he worked at Intel for 18 years, mostly focusing on machine learning.

"His background — in artificial intelligence and machine learning — was the perfect use case," Amador says. "We started talking about how that could apply to the equity markets."

Birth of an ETF

Even though the early groundwork had been laid for what would eventually become their newest venture, Khatua and Amador went their separate ways after the program ended. But the gears in Khatua's head never stopped turning, and in September 2016 he invited Amador to join him in building a product that would combine their respective areas of expertise.

Amador took some time to think about it. In his mind, the result would be an AI-powered quantitative hedge fund, and he wasn't sure if he wanted to give up his job at Fidelity for that. But Khatua had other ideas: He wanted to build and launch an ETF.

To him, the ideal application for his technology was to get it into as many hands as possible. And if he combined it with Amador's investment prowess, they could build an ETF available to be traded by the average person with a brokerage account.

[Image: Chida Khatua]

"Working at Intel gave me insight into how machine learning and AI technology are maturing and how the benefits they offer can really be maximized," Khatua tells Business Insider. "It gave me a unique perspective, and I asked myself for a while when the right time would be to go out and create some product that can help many people."

Acting like a rational investor

A big part of Amador's decision to ultimately join Khatua in pursuing an ETF was the latter's acceptance into the highest tier of the IBM Global Entrepreneurship Program. After all, his machine learning and AI efforts were powered by the company's Watson supercomputer.

That gave Khatua $125,000 with which to pursue his idea, and it provided Amador crucial validation for the endeavor. He joined up shortly thereafter, and the duo launched Equbot.

Then they put Watson to work. The eventual result was the recently launched AI Powered Equity ETF (ticker: AIEQ), which analyzes more data than humanly possible, all in the pursuit of building the perfect portfolio of 30 to 70 stocks. And the technology enables it to do that while constantly analyzing information for 6,000 US-listed companies.

[Image: Art Amador, Equbot]

But there's a wrinkle. Equbot's AI model is built to act like a rational investor. In addition to analyzing regulatory filings, quarterly news releases, articles, social-media postings, and management teams, it's also designed to assess market sentiment and weed out potentially faulty inputs — including so-called fake news.

"A rational investor looks at a company as a whole and they draw insight into what’s right looking at the complete picture," Khatua says. "The AI model helps us do that. The technology doesn’t only help you decide what to do; it can also educate you on why it’s happening."


That's a key element of AIEQ and one that sets it apart from the hedge funds that use AI to construct trading strategies. Khatua says many of those models function as a "conceptual black box," because the presence of certain stocks can't be explained in a rational way. In his mind, Equbot's ETF offers the best of both worlds: It's based on a mountain of analysis and the stock-picking methodology can be explained.

"We know why something's in our portfolio after our system chooses it," Amador says. "'The system picked it' is not usually an explanation that investors will buy."

Further, the machine-learning aspect of AIEQ is crucial in avoiding human error. Amador points out that even if a firm had 6,000 analysts each responsible for reading 150 to 200 articles about one stock each day, that work would have to be cross-referenced against the findings of all other employees, then funneled into one objective opinion.

"Humans don’t have the speed, capacity, or retention to do this," he says.

The story so far

AIEQ has slid 0.9% since its launch on October 18, while the benchmark S&P 500 has risen 1.6%. The biggest laggards in the fund are Lifepoint Health, Newell Brands and Vista Outdoor, which have each dropped more than 20% over the period.

But it's far too early to judge the success of AIEQ based on five weeks of returns. The more telling statistic is the volume of shares traded. The ETF has seen an average of 259,000 units change hands daily, a strong showing for a fledgling fund. It had about $70 million in assets on Monday, roughly 10 times its size during the first week of trading.

The way that Khatua and Amador see it, interest in their product will continue to grow as long as personal bias continues to cloud investment decisions — something they see happening even at the highest level of professional money management.

"You can remove that by making this investment process more autonomous, as we've done," Amador says. "It's nothing against people. It's just human instinct."

SEE ALSO: The stock market's robot revolution is here

Join the conversation about this story »

NOW WATCH: A senior investment officer at a $695 billion firm breaks down tax reform

China's race for artificial-intelligence technology may give its military an edge over the US

artificial intelligence ai alibaba

WASHINGTON (Reuters) - A research arm of the US intelligence community just wrapped up a competition to see who could develop the best facial recognition technology. The challenge: identify as many passengers as possible walking on an aircraft boarding ramp.

Of all the entries, it was a Chinese start-up company called Yitu Tech that walked away with the $25,000 prize this month, the highest of three cash awards.
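The core matching step in a task like this is comparing face embeddings against a gallery of enrolled identities by similarity. The sketch below is a toy assumption: the vectors, names, and threshold are invented, and a real system such as Yitu's would produce high-dimensional embeddings with a deep network rather than three-element lists.

```python
# Toy sketch of embedding-based face identification: score a probe face
# against a gallery by cosine similarity and accept the best match only
# above a confidence threshold. Vectors and names are illustrative.
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def identify(probe, gallery, threshold=0.8):
    """Return the best-matching identity, or None if no match is confident."""
    best_id, best_sim = None, -1.0
    for identity, embedding in gallery.items():
        sim = cosine(probe, embedding)
        if sim > best_sim:
            best_id, best_sim = identity, sim
    return best_id if best_sim >= threshold else None

gallery = {"passenger_1": [0.9, 0.1, 0.0], "passenger_2": [0.0, 1.0, 0.2]}
print(identify([0.85, 0.15, 0.05], gallery))  # probe close to passenger_1
```

The threshold is what makes the boarding-ramp challenge hard: set it too low and the system misidentifies passengers, too high and it fails to recognize them at all.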

The competition was one of many examples cited in a report by a US-based think tank about how China's military might leverage its country's rapid advances in artificial intelligence to modernize its armed forces and, potentially, seek advantages against the United States.

"China is no longer in a position of technological inferiority relative to the United States but rather has become a true peer (competitor) that may have the capability to overtake the United States in AI," said the report, written by Elsa Kania at the Center for a New American Security (CNAS) and due to be released on Tuesday.

Future US-China competition in AI, Kania wrote, "could alter future economic and military balances of power."

artificial intelligence robot

Alphabet Inc's Executive Chairman Eric Schmidt, who heads a Pentagon advisory board, delivered a similar warning about China's potential at a recent gathering in Washington.

Schmidt noted that China's national plan for the future of artificial intelligence, announced in July, calls for catching up to the United States in the coming years and eventually becoming the world's primary AI innovation center.

"I'm assuming that our lead will continue over the next five years, and that China will catch up extremely quickly. So, in five years we'll kind of be at the same level, possibly," Schmidt said told the conference, which was also hosted by CNAS.

An unreleased Pentagon document, viewed by Reuters, warned earlier this year that Chinese firms were skirting US oversight and gaining access to sensitive US AI technology with potential military applications by buying stakes in US firms.

In response, a bipartisan group of lawmakers in the US Senate and House of Representatives this month introduced bills to toughen US foreign investment rules.

The CNAS report noted the Chinese acquisitions and said Beijing faces hurdles to forging a domestic AI industry to rival the United States, including recruiting top talent.

Schmidt, however, expressed confidence in China's ability.

"If you have any kind of ... concern that, somehow their system and educational system is not going to produce the kind of people that I'm talking about, you're wrong," he said.

Battlefield 'singularity'

military robot

Artificial intelligence, which promises to revolutionize transportation with the advent of self-driving cars and bring major advances to medicine, is also expected to have military applications that could alter the battlefield.

Some machine learning technology is already being applied to a Pentagon project that aims to have computers help sift through drone footage, reducing the work for human analysts.
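The triage idea behind that project (a model flags frames so human analysts review only a fraction of the footage) can be sketched as follows. The classifier here is a placeholder function standing in for a trained vision model; the data and threshold are invented for illustration and are not tied to the Pentagon's actual system.

```python
# Minimal sketch of ML-assisted footage triage: score every frame with a
# (stand-in) classifier and surface only the frames worth human review.

def classify_frame(frame):
    """Placeholder for a vision model: returns a detection confidence 0..1."""
    return frame.get("motion", 0.0)  # toy heuristic for illustration only

def triage(frames, threshold=0.5):
    """Return (index, confidence) pairs for frames an analyst should review."""
    return [(i, classify_frame(f)) for i, f in enumerate(frames)
            if classify_frame(f) >= threshold]

footage = [{"motion": 0.1}, {"motion": 0.9}, {"motion": 0.2}, {"motion": 0.7}]
flagged = triage(footage)
print(f"{len(flagged)} of {len(footage)} frames sent to analysts")
```

Even with an imperfect model, filtering out the obviously empty frames is what reduces the analysts' workload.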

China's People's Liberation Army is also investing in a range of AI-related projects and PLA research institutes are partnering with the Chinese defense industry, the report said, citing publicly available documents.

"The PLA anticipates that the advent of AI could fundamentally change the character of warfare," the report said.

china robot

Kania acknowledged that much of her research was speculative, given the early stages of AI development and policies surrounding it in China and elsewhere.

Still, some PLA thinkers anticipate the approach of a "singularity" on the battlefield, where humans can no longer keep pace with the speed and tempo of machine-led decisions during combat, the report said.

The report quoted PLA Lieutenant General Liu Guozhi, the director of the Central Military Commission's Science and Technology Commission, warning "(we) must ... seize the opportunity to change paradigms."

Although Pentagon policy currently calls for a human role in offensive actions carried out by machines, it was unclear whether China's People's Liberation Army would adopt such a policy, the report said.

"The PLA may leverage AI in unique and perhaps unexpected ways, likely less constrained by the legal and ethical concerns prominent in US thinking," Kania wrote.

(Reporting by Phil Stewart; Additional reporting by Cate Cadell in Beijing; Editing by Lisa Shumaker)

SEE ALSO: The US Air Force's top officer wants the light-attack aircraft to be part of a high-tech battlefield

NOW WATCH: Elon Musk’s artificial intelligence company created virtual robots that can sumo wrestle and play soccer

Tech giants are fighting to hire the best AI talent at the NIPS conference in LA this week

Chris Bishop June 2015 Image 3

  • The NIPS conference has become the AI hiring event of the year.
  • Big tech firms like Google, Facebook, and Microsoft send armies of people to try to find machine learning experts to join their ranks.
  • Salaries on offer often run into the hundreds of thousands of pounds.


The global war for artificial intelligence (AI) talent is raging, with tech giants fighting it out to hire the brightest minds in the field and use them to take their platforms into uncharted waters.

Finding the top people isn't easy. There's currently a shortage of people with the skills and experience needed to make breakthroughs in machine learning, a field of computer science that gives machines the ability to learn without being explicitly programmed. Fortunately, many of the top minds in the field are going to be concentrated in one place this week when they descend on a conference in Long Beach, California, called NIPS, which stands for neural information processing systems.
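The phrase "learn without being explicitly programmed" can be made concrete with a toy example. The perceptron below is never told the rule for logical OR; it recovers the rule purely by adjusting its weights on labeled examples. This is a generic textbook illustration, not any particular company's research.

```python
# A perceptron learns logical OR from examples alone: no OR rule is coded,
# only a weight-update procedure driven by prediction errors.

def train_perceptron(samples, epochs=10, lr=0.1):
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, target in samples:
            pred = 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0
            err = target - pred  # 0 when correct; +/-1 when wrong
            w = [w[0] + lr * err * x[0], w[1] + lr * err * x[1]]
            b += lr * err
    return w, b

data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
w, b = train_perceptron(data)
learned = {x: 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0 for x, _ in data}
print(learned)
```

Modern deep learning scales this same idea (error-driven weight updates) to millions of parameters, which is why the talent that understands it is in such demand.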

Google, Microsoft, DeepMind, Facebook, Intel, Nvidia, Amazon, Apple, and OpenAI (Elon Musk's AI research lab) will all be at NIPS presenting their latest research and looking to hire people from rival firms, as well as PhD students fresh out of universities like Stanford, MIT, Oxford, Cambridge, and Imperial.

"The NIPS conference that is once a year in December is unofficially the hiring event of the year," a Microsoft Research spokesperson told Business Insider last week. "So Microsoft is one of the companies that will be there representing the UK and looking to find those researchers that not only are the world’s best but also fit with our values. We've got quite a stake in this."

 

In some respects, AI specialists are quickly becoming the new investment bankers. Multibillion-dollar tech companies are willing to pay big annual salaries — often in excess of £100,000 and sometimes up to £1 million — to people who know how to program machines to learn, according to Nick Bostrom, a philosopher at the University of Oxford. He told a select committee on artificial intelligence in October that many of these people are still in their mid-twenties.

Chris Bishop, head of Microsoft Research in Cambridge, told Business Insider that he will be on the lookout for potential recruits at the NIPS conference this week. Microsoft is one of the biggest sponsors at the event and he'll be manning Microsoft's Research stand, among other things.

"What we're seeing is this tremendous growth in demand for talent and right now that's being led by the big tech companies, including Microsoft, who are focusing on artificial intelligence and machine learning," said Bishop. "We're seeing market forces at play, we're seeing a lot of very smart young people choosing to go into this field but it takes time."

In a few years' time it won't just be big tech companies that want to hire machine learning and AI talent, according to Bishop.

"As machine learning becomes ever more pervasive we'll see many different sectors — finance, manufacturing, retail, healthcare, across the whole spectrum — looking for talent in this space," he said. "So I think the [talent shortage[ phenomenon we're seeing will be with us for some number of years. It's very hard to predict of course but I don't see it changing dramatically any time soon."

But the near-term AI developments over the next decade or two won't lead to the sci-fi AI scenarios painted in Hollywood movies, where machines with human-level intelligence (or even superintelligence) roam around our planet of their own accord, according to Bishop.

"Artificial intelligence in the true sense is an aspirational goal for the future which will be underpinned by machine learning but in the short term — in the next decade or two at least — there will be many practical applications of machine learning that I wouldn't really call true intelligence but of huge practical value," he said.

"So we need to upskill people as the nature of the way software gets created is changing. Not for all software but for much of it."

NOW WATCH: Sophia, the world's first-ever robot citizen, has a message for humanity this Thanksgiving

Artificial intelligence isn’t just going to transform your business — it’s going to change technology itself

Bob Picciano, senior vice president, IBM Cognitive Systems

Open any business publication or digital journal today, and you will read about the promise of AI, known as artificial or augmented intelligence, and how it will transform your business. The fact is: AI will not only transform your entire business — whether you are in healthcare, finance, retail, or manufacturing — but it will also transform technology itself. 

The essential task of information technology (IT) — and how we measure its value — has reached an inflection point.

It's no longer just about process automation and codifying business logic. Instead, insight is the new currency. The speed with which we can scale that insight and the knowledge it brings is the basis for value creation and the key to competitive advantage.

This trend is fueling a surging interest in deep learning and AI, or, as IBM calls it, cognitive computing. According to IDC, global spending on AI-related hardware and software is expected to exceed $57.6 billion in 2021, almost a five-fold increase over the $12 billion that will be spent this year.

The real promise of AI is to unleash actionable insights that would otherwise be trapped in massive amounts of data. Much of that data is unstructured data — or the data generated by such things as written reports and journals, videos, social media posts, or even spoken words.

Since we introduced IBM Watson, and our powerful AI cloud platform, we’ve continued on our journey to reinvent computing for this new era. And we’ve learned that to meet the new demands of cognitive workloads, we need to change everything: from the algorithms and mathematics that are the foundations of the software, to the hardware that drives it, and to the cloud that deploys it. 

Organizations that apply deep learning and AI, which are the superchargers for extracting insight, need the right architecture to ingest and analyze very large data sets. And you need to be able to do it at lightning-fast speeds, or faster than your competitors.

IBM is unveiling new systems built from the ground up to meet the unique demands of the AI era. They are POWER9, the first processor designed specifically for AI, as well as the next-generation POWER9-based IBM Power System AC922. These new Power Systems are powerful in their own right, but they are also designed to exploit specialized silicon, such as graphics processing units (GPUs), which accelerate the kinds of math and information processing that power new cognitive algorithms.

The result is an AI superhighway for insights that can drive transformational outcomes for clients in every industry.

A case in point: the US Department of Energy (DOE) Summit and Sierra supercomputers, which are soon to be among the most powerful supercomputers in the world and are equipped with POWER9 processors and our partner NVIDIA’s newest Volta-based Tesla GPUs. The DOE’s goal is to create the world’s fastest supercomputer at 200 petaflops — giving it the ability to perform 200 quadrillion calculations every second. That’s an enormous amount of computing power directed at solving the world’s most complex problems.
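For scale, the arithmetic behind that figure is simple: a petaflop is 10^15 floating-point operations per second, so a 200-petaflop machine performs 2 × 10^17 operations each second.

```python
# The arithmetic behind the DOE's 200-petaflop target.
petaflop = 10 ** 15          # floating-point operations per second
target = 200 * petaflop      # the DOE's stated goal
print(f"{target:.0e} operations per second")
```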

Google also is tapping into the latest POWER technology to allow for further opportunities for innovation in its datacenters.

IBM continues to pioneer new advances in silicon, hardware, and software as well as an open ecosystem, which we view as "innovation protection" to ensure that innovations are quickly brought to market to meet clients’ dynamic infrastructure needs.

We also believe in taking an integrated approach to cognitive infrastructure — with both software and hardware that are optimized to work together, while tapping into IBM research innovations such as distributed deep learning on PowerAI.

The real promise of AI is to fundamentally transform industries and professions. The goal is to enable a new understanding of customers and markets, risks and opportunities, and opening new frontiers in innovation for organizations and society.

This post is sponsor content from IBM and was created by IBM and BI Studios.

Trump's speech has baffled translators and linguists since his campaign — and an AI program might shed light on why

Donald Trump

  • An AI transcription tool found that Donald Trump is one of the hardest politicians to understand.
  • Some of Trump's speech habits, like turning away from the microphone and speaking in a stream-of-consciousness style, made it difficult for the software to relay his words.
  • Nikki Haley was the easiest politician for the AI to understand.


The unique speech patterns of President Donald Trump have posed problems for translators and linguists since he launched his campaign in 2015.

Now, we can add artificial intelligence to the list.

Trump was one of the politicians hardest to understand for Trint, a software program that uses AI to generate transcripts of audio and video, the company said in November.

Trint ran speeches from more than a dozen prominent politicians through its software and found that the transcription of Trump's speech had a greater rate of errors than speeches from most other politicians — he ranked 11th out of the 15 people Trint analyzed.

Granted, the program still transcribed 97.89% of Trump's speech correctly. But it's a notch lower than the near-perfect transcriptions of speeches from UN Ambassador Nikki Haley, Hillary Clinton, and Senate Minority Leader Chuck Schumer, whose speeches were transcribed more than 99% correctly.
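For context, transcription accuracy figures like these are typically computed as one minus the word error rate (WER): the word-level edit distance between the reference text and the machine transcript, divided by the length of the reference. The sentences below are made up for illustration and are not from Trint's test set.

```python
# Word error rate via standard dynamic-programming edit distance over words.
# The reference/hypothesis pair is invented to mimic a swallowed prefix.

def word_error_rate(reference, hypothesis):
    ref, hyp = reference.split(), hypothesis.split()
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution
    return d[len(ref)][len(hyp)] / len(ref)

ref = "it was an incredible night for the country"
hyp = "it was an credible night for the country"  # swallowed prefix
print(f"accuracy: {1 - word_error_rate(ref, hyp):.2%}")
```

One substitution in an eight-word sentence costs 12.5 percentage points of accuracy, which is why a few swallowed prefixes per speech are enough to separate Trump's 97.89% from Haley's 99%-plus.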

So what explains the drop-off for Trump? Some of Trump's public-speaking tics could explain it, Trint co-founder Jeff Kofman said.

"He has a tendency to want to swallow some of the prefixes and suffixes in his words," Kofman told Business Insider. In the June 2017 speech he looked at, for example, it was difficult to hear the first syllable when Trump said "incredible," leading Trint to record "credible."

Trump also has a habit of turning away from the microphone and addressing people onstage, something that drove the AI transcriber crazy.

Lastly, Trump's stream-of-consciousness delivery makes it difficult for a bot to follow the syntax of his sentences.

"That makes it really challenging to sort of see the logic flow in a sentence and to put grammar to it, for now," Kofman said. "Punctuation is a real challenge for artificial intelligence … We often don't speak in logical sentences or sequences."

In other words, it may be a while before a bot can perfectly understand the president.

SEE ALSO: Everyone is blasting Trump for writing 'mike' instead of 'mic' — but here's why Trump is right

DON'T MISS: 'The blacks,' 'the gays,' 'the Muslims' — linguists explain one of Donald Trump's most unusual speech tics

NOW WATCH: Elon Musk and Mark Zuckerberg are waging a war of words over the future of AI
