
This is what 100 'perfect' selfies look like


Ever wonder how to snap a selfie that will break the internet? Luckily for you, a computer program has already figured out the secrets to getting likes, favorites, and hearts.

An artificial intelligence (AI) system built by Stanford University researcher Andrej Karpathy looked at 2 million selfies to train itself to recognize the ones most likely to get a lot of love.

Below is the cream of the crop — the top 100 images in a set of 50,000 photos that the algorithm hadn't seen before.

You'll see that not even one man is included in the best 100 shots, and there are very few people of color:

Best 100 selfies

Here are a few (blunt and possibly offensive) rules the program came up with on its own for taking effective selfies, according to Karpathy's blog post about the project:

1) Be female.

2) Show your long hair.

3) Take it alone.

4) Use a light background or a filter. Selfies that were very washed out, filtered black and white, or had a border got more likes. According to Karpathy, "over-saturated lighting ... often makes the face look much more uniform and faded out."

5) Crop the image. Make sure your forehead gets cut off and your face is prominently in the middle third of the photo. Some of the "best" selfies are also slightly tilted.

And here are the male images that did best. You can see similar trends cropping up, especially the number of images with white borders. But the rules do change slightly: the male images more frequently included the whole head and shoulders, Karpathy writes.

On the other hand, the worst images, or the selfies that probably wouldn't get as many likes, were group shots, badly lit, and often too close up.

So if you want your selfie to get a lot of love, make sure you follow the rules above.

Guia Marie Del Prado wrote a previous version of this post.



Siri on the Mac is incredibly awkward to use in public (AAPL)



One of the biggest changes in macOS Sierra, the new operating system for Mac laptops and desktops coming this fall, is the addition of Siri.

Apple thinks people might want Siri's help finding documents on their Mac, changing settings on the fly, or finding old emails. And that's exactly what Siri can do.

It's a good idea in theory, but in practice it's incredibly awkward. I can't imagine anyone ever using Siri on their Mac in public, especially in a work setting.

For the past week, my colleague Steve Kovach has been testing a beta version of macOS Sierra, constantly issuing commands and questions like "Send an email to Alyson" or "Turn on Bluetooth."

At home, using Siri on the Mac would probably be fine. I talk to my Amazon Echo all the time at home, asking for the time, the weather, or a song, and I never feel weird about it. But in a work setting, it was hard not to chuckle at these commands — regardless of whether Siri gave the correct response (it often didn't, but this is beta software, after all).

And so, it's disappointing to know that the standout feature of macOS Sierra, the first version of Apple's Mac operating system to drop the OS X moniker, can only be used in some settings and won't help me at work at all. It's the same reason I don't use Siri on my iPhone at work — it's flat-out embarrassing to have other people listen to me talking to a robot.

I think Siri on the Mac could keep getting better — down the road, I could see people using it like Joaquin Phoenix's character used his Samantha AI in the movie "Her," where he had full conversations with his personal assistant, which was also able to dictate everything he said, write emails for him, etc. But right now, Siri on the Mac is just plain awkward, and to me, it definitely won't change the way I interact with my computer anytime soon.


Scientists have built a machine that can visualize thoughts from brain scans



If you think your mind is the only safe place left for all your secrets, think again, because scientists are taking real steps toward reading your thoughts and putting them on a screen for everyone to see.

A team from the University of Oregon has built a system that can read people’s thoughts via brain scans, and reconstruct the faces they were visualising in their heads. As you’ll soon see, the results were pretty damn creepy. 

"We can take someone’s memory - which is typically something internal and private - and we can pull it out from their brains," one of the team, neuroscientist Brice Kuhl, told Brian Resnick at Vox.

Here’s how it works. The researchers selected 23 volunteers, and compiled a set of 1,000 colour photos of random people’s faces. The volunteers were shown these pictures while hooked up to an fMRI machine, which detects subtle changes in the blood flow of the brain to measure their neurological activity.brain

Also hooked up to the fMRI machine was an artificial intelligence program that read the participants' brain activity while taking in a mathematical description of each face they were exposed to in real time. The researchers assigned 300 numbers to certain physical features on the faces to help the AI 'see' them as code.

Basically, this first phase was a training session for the AI - it needed to learn how certain bursts of neurological activity correlated to certain physical features on the faces.

Once the AI had formed enough brain activity-face code match-ups, the team started phase two of the experiment. This time, the AI was hooked up to the fMRI machine only, and had to figure out what the faces looked like based only on the participants’ brain activity. 

All the faces shown to the participants in this round were completely different from the previous round.

The machine managed to reconstruct each face based on activity from two separate regions in the brain: the angular gyrus (ANG), which is involved in a number of processes related to language, number processing, spatial awareness, and the formation of vivid memories; and the occipitotemporal cortex (OTC), which processes visual cues.
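None of the study's code is reproduced here, but the two-phase setup described above maps onto standard regression: learn a mapping from brain activity to the 300-number face codes during training, then turn new brain activity into a code. A minimal sketch, with every shape, name, and value invented for illustration:

```python
import numpy as np
from sklearn.linear_model import Ridge

# Illustrative sizes: 900 training trials, 5,000 voxels, 300-number face codes.
rng = np.random.default_rng(0)
brain_train = rng.normal(size=(900, 5000))  # fMRI activity while viewing each face
codes_train = rng.normal(size=(900, 300))   # the "mathematical description" of each face

# Phase one (the training session): learn how bursts of activity map to features.
model = Ridge(alpha=1.0).fit(brain_train, codes_train)

# Phase two: from brain activity alone, predict the 300-number code, which a
# separate renderer would then turn back into a face image.
brain_new = rng.normal(size=(1, 5000))
reconstructed_code = model.predict(brain_new)  # shape (1, 300)
```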

You can see the (really strange) results below:

[Image: the machine's face reconstructions]

So, um, yep, we’re not going to be strapping down criminals and drawing perfect reconstructions of a crime scene based on their memories, or using the memories of victims to construct mug shots of criminals, any time soon. 

But the researchers proved something very important: as Resnick reports for Vox, when they showed the strange reconstructions to another set of participants, those participants could correctly answer questions about the original faces seen by the group hooked up to the fMRI machine.

"[The researchers] showed these reconstructed images to a separate group of online survey respondents and asked simple questions like, 'Is this male or female?' 'Is this person happy or sad?' and 'Is their skin colour light or dark?’ 

To a degree greater than chance, the responses checked out. These basic details of the faces can be gleaned from mind reading."

So one set of people could read the thoughts of another set of people - to a point - via a machine. 

The team is now working on an even tougher task - getting participants to look at a face and hold it in their memory, then having the AI reconstruct it based on the person's memory of what the face looked like.

As you can imagine, this is SUPER hard to do, and the results make that pretty obvious:

[Image: faces reconstructed from participants' memories]

It's janky as hell, but there's potential here, and that’s pretty freaking cool, especially when we consider just how fast technology like this can advance given the right resources.

Maybe one day we'll be able to cut out the middle man and send pictures - not just words - directly to each other using just our thoughts... No, argh, stop sending me telepathy porn, damn it.

The study has been published in The Journal of Neuroscience.



Google gains confidence in RankBrain and deploys it for all search results


This story was delivered to BI Intelligence Apps and Platforms Briefing subscribers. To learn more and subscribe, please click here.

Google is now using RankBrain, its machine-learning system, to process the more than 2 trillion queries sent each year through its search engine, according to Search Engine Land.

This is a considerable increase from when RankBrain was first introduced in Q3 2015, when it was used for roughly 15% of search queries.

The full-scale deployment of RankBrain will make Google Search far more intuitive for users. RankBrain "interprets" users' search terms using machine learning — a type of artificial intelligence — to find pages that may not contain the exact keywords that were used in the search query.

The system is able to "guess" what a user means when they type in an ambiguous search query and then pass along its interpretation to the search platform. Furthermore, it learns over time, steadily refining search results. This helps surface more relevant information for users, rather than relying on keywords and links alone.
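Google hasn't published RankBrain's internals, but the behavior described, matching a query to pages that never contain its exact keywords, is what embedding-based retrieval does. A toy sketch with hand-picked vectors (every number here is invented):

```python
import numpy as np

# Toy word vectors; a real system would learn these from web-scale text.
vectors = {
    "car":   np.array([0.90, 0.10, 0.00]),
    "auto":  np.array([0.85, 0.15, 0.05]),
    "loan":  np.array([0.10, 0.90, 0.20]),
    "pizza": np.array([0.00, 0.10, 0.95]),
}

def embed(text):
    """Average the vectors of known words: a crude text embedding."""
    return np.mean([vectors[w] for w in text.split() if w in vectors], axis=0)

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

query = embed("auto loan")
pages = {"car loan offers": embed("car loan"), "pizza recipes": embed("pizza")}

# The car-loan page wins even though it never contains the word "auto".
print(max(pages, key=lambda title: cosine(query, pages[title])))
```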

Moving forward, RankBrain could help improve app discoverability as Google continues to merge the two worlds of the mobile web and apps. Google includes a number of apps within its Search results and has also introduced App Streaming, which allows users to “stream” apps that appear in their results without having to download them.

By improving the accuracy and relevancy of users’ queries, Google could help smaller developers get their apps in front of consumers. 



One of these poems was written by an AI — can you guess which one?



Artificial intelligence has competed against humans as novelists, Go players, hackers, and pathologists. Now, AI algorithms have taken on the role of poet through PoetiX, a competition arranged by the Neukom Institute for Computational Science at Dartmouth College.

The PoetiX challenge asked programmers to create software that can write a sonnet based on a single noun or noun-phrase prompt. The best of these algorithms were then entered into the PoetiX Turing Test, which pitted them against human poets and asked a panel of judges to determine which of the sonnets were written by man and which were written by a machine.

Each of the judges could pinpoint the software-written sonnets, reports NPR's All Tech Considered. How do you think you'd do? Only one of the sonnets below was written by an algorithm. Find the answer below.

Sonnet #1

A green nub pushes up from moist, dark soil.
Three weeks without stirring, now without strife
From the unknown depths of a thumbpot life
In patient rhythm slides forth without turmoil,
A tiny green thing poking through its sheath.
Shall I see the world? Yes, it is bright.
Silent and slow it stretches for the light
And opens, uncurling, above and beneath.
The sun warms it and with a little time
Another slight leaf joins its neighbor,
They crown slowly and birth without labor
Feeding on the air's breath like a rhyme.
How can we know with body and with brain,
The force that makes the earth suck up the rain.

Sonnet #2

The dirty rusty wooden dresser drawer.
A couple million people wearing drawers,
Or looking through a lonely oven door,
Flowers covered under marble floors.
And lying sleeping on an open bed.
And I remember having started tripping,
Or any angel hanging overhead,
Without another cup of coffee dripping.

Surrounded by a pretty little sergeant,
Another morning at an early crawl.
And from the other side of my apartment,
empty room behind the inner wall.
A thousand pictures on the kitchen floor,
Talked about a hundred years or more.

Sonnet #3

And what if from distress comes something fine,
And following this dress-rehearsal pain
Gives way to Joy, mistress of ardor, art,
And love, who sure this mess would straighten out?

For Joy has no illusions of a break;
She brooks many ill fusions of extremes,
And shares her light 'till few suns could compete;
Her binding love makes twos ones and keeps peace.

So best not make a strumpet of this Joy,
Assert that she pays some debt with her smile,
Or name to her a numb set of stale sparks;
She never has succumbed yet, bless her heart.

Her love is full and indiscriminate
And even so you'll find no sin in it.

If you guessed number one or three, you've been fooled. Number one was written by a human named Ivy Schweitzer and number three was written by a human named Kurtis Hessel.

The second sonnet, however, was written by an algorithm that was programmed by Marjan Ghazvininejad, Xing Shi, Yejin Choi, and Kevin Knight from the University of Southern California Information Sciences Institute.



These 5 tech CEOs love AI, even if it could take over the world someday



Artificial intelligence has been a popular topic for decades.

"2001: A Space Odyssey" came out in 1968, and introduced the world to HAL 9000, a smart computer that ended up being homicidal.

Evil artificial intelligence is mostly a product of science fiction so far, but as technology advances, it is starting to seem more and more plausible.

Siri is just a harmless assistant, and Poncho is a cute little weather cat, but soon we could have highways full of driverless cars and skies full of autonomous drones that we wouldn't want to go rogue. 

This has led several tech CEOs to speak openly about the potential negative effects of AI, and many of them have started research and education projects aimed at making AI as useful and human-friendly as possible.

"The Matrix,""Terminator" and even Auto, the pilot robot in "WALL-E," depict a future where AI has gone bad. So let's hope these CEOs listed below know what they are doing.

Eric Schmidt - executive chairman of Alphabet, former CEO of Google

Google has a bit of a sunnier outlook on AI than most.

"Some voices have fanned fears of AI and called for urgent measures to avoid a hypothetical dystopia," Schmidt wrote in Fortune.

"We take a much more optimistic view."

The Fortune op-ed, written with Google X founder Sebastian Thrun, was headlined "Let's stop freaking out about AI" and painted a rosier view than other CEOs on this list.

Google's AI research is particularly worrisome to some because of the amount of data it collects from its users. 

Google recently mastered the notoriously complicated game of Go, beating a Grandmaster using only artificial intelligence. 

Google set up an AI ethics board to help quell some of its critics, but refuses to reveal who sits on the board.



Elon Musk - CEO of Tesla and SpaceX

Musk is perhaps the most prominent voice warning us about the impact of AI. He helped start OpenAI, a nonprofit research company that hopes to develop benevolent, open-source AI technology.

He told the audience at Recode's Code Conference that there is only one company's artificial intelligence he is worried about, strongly hinting at Google, but not confirming the exact company.

Musk is so vocal perhaps because of his view of how humans and technology will interact in the future.

Musk said that he expects a "neural lace" technology to help humans interface directly with computers, and he thinks humans are already cyborgs.

Both ideas are the kind of far-out concepts that we have come to expect from the eccentric CEO, but if he's right, AI could have even more access to human minds and bodies.

That means making sure future AI tech is not like it was in "Terminator" or "The Matrix" is probably in our best interest.

"It's really just trying to increase the probability that the future will be good," Musk said about his AI research at the Code Conference.



Satya Nadella - CEO of Microsoft

"Depending on whom you listen to, the so-called “singularity,” that moment when computer intelligence will surpass human intelligence, might occur by the year 2100—or it’s simply the stuff of science fiction," Nadella said in a recent Slate op-ed.

"I would argue that perhaps the most productive debate we can have isn’t one of good versus evil: The debate should be about the values instilled in the people and institutions creating this technology."

One of Microsoft's most recent forays into AI ended up becoming a genocidal racist. The Twitter bot lived only within the context of social media, which rendered it pretty harmless, but it demonstrated how quickly things can go wrong.

As AI enters more areas of our lives, it has the potential to impact us in untold ways. This is why Nadella says we have to be compassionate toward AI as we teach it to be smarter.

He lays out four rules humans should follow as we create more and more AI: empathy, education, creativity, and judgment and accountability.

Read the details about his rules on Slate.




Facebook updates simplify bot interactions (FB)


This story was delivered to BI Intelligence Apps and Platforms Briefing subscribers. To learn more and subscribe, please click here.

Last week, Facebook unveiled a number of new features for bot developers on the Messenger Platform aimed at making the conversational tech more accessible to users.

The updates, which include features like pre-set reply buttons and a rating system for users, should help make bot interactions more intuitive and seamless.

The updates come as the growth of Messenger's bot ecosystem hits a speed bump. Although more than 11,000 bots have launched on Messenger since the platform debuted in April, that's only a small lift from the 10,000 bots the company reported in May.

In essence, Facebook is joining together the conversational responsiveness and accessibility of chatbots with the intuitive interface of apps. This is a vital evolution for Facebook as it attempts to gain the necessary traction for the nascent technology. Facebook hopes the updates and additions to the bot interface will incentivize further engagement with users, which, in turn, will entice more developers to the platform.

Here are the most important updates Facebook is bringing to Messenger:

  • Ratings: Users can now provide star ratings and open-text feedback for bot developers. This will help developers get a better feel for how to improve their software.
  • Quick replies: These are pre-assigned action buttons developers can deploy to help users navigate the bot (a minimal payload sketch follows this list). For example, a movie bot might ask what type of genre a user likes. An array of buttons will appear at the bottom of the screen giving users options for responses. The buttons also help signpost the bot's capabilities. This solves one of the main pain points of chatbots: many users don't know what they have to write to interact with the bots, or what types of tasks bots are capable of completing.
  • Persistent menus: A list of up to five commands that developers can place in the chat interface. These will eliminate the need for users to remember text commands as well as provide a way to re-engage lapsed conversations.
  • Account linking: This is a secure protocol that lets businesses connect existing customers’ accounts with their Messenger accounts. When a consumer begins a conversation with a business on Messenger, that business will be able to see if the user is already an account holder with the company. Account linking will take away certain friction points, such as getting account information, identity verification, or the type of account the customer has.
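For the quick replies described above, Messenger's Send API takes a quick_replies array alongside the message text. A minimal sketch of the movie-genre example; the access token, user ID, and payload strings are placeholders, and the endpoint version reflects the API at the time of writing:

```python
import requests

ACCESS_TOKEN = "PAGE_ACCESS_TOKEN"  # placeholder page token

payload = {
    "recipient": {"id": "USER_PSID"},  # placeholder user ID
    "message": {
        "text": "What genre are you in the mood for?",
        "quick_replies": [
            {"content_type": "text", "title": "Comedy", "payload": "GENRE_COMEDY"},
            {"content_type": "text", "title": "Horror", "payload": "GENRE_HORROR"},
            {"content_type": "text", "title": "Sci-fi", "payload": "GENRE_SCIFI"},
        ],
    },
}

# The chosen button's payload comes back to the bot's webhook as a message event.
resp = requests.post(
    "https://graph.facebook.com/v2.6/me/messages",
    params={"access_token": ACCESS_TOKEN},
    json=payload,
)
resp.raise_for_status()
```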

Will McKitterick, senior research analyst for BI Intelligence, has compiled a detailed report on messaging apps that takes a close look at the size of the messaging app market, how these apps are changing, and the types of opportunities for monetization that have emerged from the growing audience that uses messaging services daily.

Here are some of the key takeaways from the report:

  • Mobile messaging apps are massive. The largest services have hundreds of millions of monthly active users (MAU). Falling data prices, cheaper devices, and improved features are helping propel their growth.
  • Messaging apps are about more than messaging. The first stage of the chat app revolution was focused on growth. In the next phase, companies will focus on building out services and monetizing chat apps’ massive user base.
  • Popular Asian messaging apps like WeChat, KakaoTalk, and LINE have taken the lead in finding innovative ways to keep users engaged. They’ve also built successful strategies for monetizing their services.
  • Media companies and marketers are still investing more time and resources into social networks like Facebook and Twitter than they are into messaging services. That will change as messaging companies build out their services and provide more avenues for connecting brands, publishers, and advertisers with users.

In full, this report:

  • Gives a high-level overview of the messaging market in the US by comparing total monthly active users for the top chat apps.
  • Examines the user behavior of chat app users, specifically what makes them so attractive to brands, publishers, and advertisers.
  • Identifies what distinguishes chat apps in the West from their counterparts in the East.
  • Discusses the potentially lucrative avenues companies are pursuing to monetize their services.
  • Offers key insights and implications for marketers as they consider interacting with users through these new platforms.




Law firms of the future will be filled with robot lawyers



We may need to start rewriting our precious lawyer jokes — smart, time-saving computers are quickly elevating the profession.

Instead of hiring expensive assistants to pore over cases and sort through tickets, law firms are increasingly turning toward artificially intelligent machines to do those menial jobs.

They are creating a future in which a costly and inefficient legal system actually becomes an attractive way for the average citizen to protect his or her civil liberties.

The first AI lawyer gets hired

Andrew Arruda, the CEO and co-founder of ROSS Intelligence, tells Tech Insider that "AI-enabled software is going to become very much the status quo and very normal" in the coming decade.

Arruda's company recently deployed the ROSS software at a handful of law firms throughout the US.

ROSS uses the supercomputing power of IBM Watson to comb through huge batches of data and, over time, learn how to best serve its users. The software can sort through something in a matter of seconds that would normally take a human hours upon hours to review.

One of the first places to use ROSS was the law firm BakerHostetler, where the software handles bankruptcy cases. Employees enter commands into the software in everyday language, like when they need to find precedents for specific cases. ROSS then searches through its legal database to produce the relevant information.

In the event a new court decision emerges in the dead of night, ROSS can even send alerts in real time.

Though some employees will no doubt lose their jobs (as tends to happen when AI enters the scene), the benefits of AI quickly trickle down for both a firm and its clients.

Currently, 80% of Americans who need a lawyer can't afford one. By using AI lawyers like ROSS, law firms can charge lower fees since they won't be paying humans (who generally prefer to get paid for their work) to handle clients' cases. In addition, out-of-work lawyers can use AI services like ROSS, which lower the barrier to entry into the market, to create more affordable options for clients.

Not all legal issues are big bankruptcy cases, though. For most people, the closest they'll ever come to a courtroom is getting a parking ticket. Here too AI is disrupting the way people interact with the legal system.


Save your money, use a chatbot instead

In the fall of 2015, 19-year-old Stanford undergrad Joshua Browder released a chatbot called DoNotPay. (Browder, it turns out, is good friends with Arruda. Silicon Valley is a small world, and legal AI is even smaller.)

Browder's app enables people to appeal unfair parking tickets without forking over hundreds of dollars in legal fees, which, in many cases, can be more expensive than just paying the ticket.

Like ROSS, DoNotPay relies on an algorithm that tries to decipher everyday language. Even in beta, it seems to be doing its job. In April, Browder released data showing DoNotPay had helped people overturn 160,000 of 250,000 parking tickets since launch — a success rate of 64%.

The beauty of machine learning, Arruda points out, is that programs like ROSS and DoNotPay only get more sophisticated with time. The more words the software sees, the better it can correct past mistakes, ultimately becoming a more helpful service.

The AI lawyer is the way of the future

That also means AI will get more sophisticated in the kinds of work it does. Arruda expects AI to start drafting its own documents, building arguments, and comparing and contrasting past cases with the one at hand.

"Law touches everything," Arruda says, likening the practice to "the operating system of society" in that it governs how people get married, get divorced, file patents, and build businesses. Artificial intelligence could, in theory, seep into all of these areas.

But there are limits. Neither Arruda nor Browder believes society will see artificially intelligent lawyers arguing cases in the courtroom. (Legally speaking, only humans can do that anyway.) A more likely scenario is that AI will serve humans in the way many advanced chess players have started using AI to help them play better matches — a form of play known as "centaur chess."

"At the center of AI systems are human who interact together," Arruda says. "I think what we're going to move toward are 'centaur lawyers,' if you will, where they both work together, and because of that union they're able to do so much more."


Obama's top economic adviser doesn't like the idea of giving people money not to work



President Obama's top economic adviser does not sound too excited about universal basic income.

In a speech at New York University on Thursday, Jason Furman — the chair of the president's Council of Economic Advisers — talked about artificial intelligence and the effect it was having, and will have, on the US economy.

And when one mentions AI, the subject of universal basic income is often not far behind.

UBI is more or less exactly what it sounds like: You give all citizens a set amount of money, whether they work or not.

But Furman, despite declaring at the top of his speech that he worried the US was not using robots and AI enough, said a solution to any future economic tension created by the proliferation of robotics was unlikely to come from giving every citizen money.

Furman argued that policymakers ought to first focus on the skills, training, and job-search-assistance programs that could help foster increasing employment opportunities for citizens even in an ever-shifting labor landscape.

Furman also said the shortfalls of the US tax system and social-assistance programs, as well as the benefits of a universal basic income, were overstated.

And another worry of many UBI proponents — that increased automation of the workforce will enhance already-widening inequality in the economy — misses the mark, in Furman's view.

Here's Furman (emphasis mine):

"Even with these changes, however, new technologies can increase inequality and potentially even poverty through changes in the distribution of wages. Nevertheless, replacing our current antipoverty programs with UBI would in any realistic design make the distribution of income worse, not better. Our tax and transfer system is largely targeted towards those in the lower half of the income distribution, which means that it works to reduce both poverty and income inequality.

"Replacing part or all of that system with a universal cash grant, which would go to all Americans regardless of income, would mean that relatively less of the system was targeted towards those at the bottom — increasing, not decreasing, income inequality.

"Unless one was willing to take in a much larger share of the economy in tax revenues than at present, it would be difficult both to provide a common amount to all individuals and to make sure that amount was sufficient to cover the needs of the poorest households. And for any additional investments in the safety net that one would want to make — and the president has proposed numerous such investments — one must confront the same targeting question."

As noted previously, much of Furman's speech dealt with concerns that the US was not using AI enough to its advantage. That a top White House official is discussing AI at length, along with the opportunities created by increased automation across our economy, is, beyond any specific comment, the biggest takeaway from this speech.

But in thinking about what problems and solutions might be on offer for the US economy in the coming years and decades, Furman is still very much thinking inside the box.

Read Furman's full speech here »



A new popular app called Prisma has insanely cool photo filters that make Instagram's feel boring


A new iOS app is using artificial intelligence to turn your boring iPhone photos into works of art — and I'm a huge fan.


Similar to VSCO, users simply upload their photo to the Prisma app and choose a filter that works best for them. Except rather than applying subtle changes to the photo, the app turns it into art, drawing inspiration from works like "Go for Baroque" by Roy Lichtenstein.

Here's how the app works and how to use it yourself:

First things first, download the app!

The Prisma app is available for free on iOS.



When you open the app, you can take a photo in real-time. I used my dog, Daphne, as my model.



Once you take the photo, you can choose from one of 33 filters. You can then swipe to the right or left to change the intensity of the filter applied.





The popular Prisma app is now on Android — here's how it works


Android users can now download Prisma, the popular app that's using artificial intelligence to turn your boring photos into works of art.


Similar to VSCO, you simply upload a photo to the Prisma app and choose a filter that works best. Except rather than applying subtle changes to the photo, the app turns that photo into art, drawing inspiration from works like "Go for Baroque" by Roy Lichtenstein.

Here's how the Prisma app works and how to use it yourself:


First things first, download the app!

The Prisma app is available for free on iOS and Android.



When you open the app, you can take a photo in real-time. I used my dog, Daphne, as my model.



Once you take the photo, you can choose from one of 33 filters. You can then swipe to the right or left to change the intensity of the filter applied.




A robot wrote the 'perfect' horror film — and could save the movie industry



Robots are smart enough to write now, which means it's only a matter of time before I lose my job. But for now, I might soon be able to enjoy "Impossible Things," the first feature-length movie written with artificial intelligence.

A company called Greenlight Essentials made an artificially intelligent robot that analyzes audience response data and writes stories based on what it thinks people will like.

With some help from the humans who made the AI, it wrote "Impossible Things," a horror movie about a family who moves to the middle of nowhere and starts hearing creepy things around the house. The trailer gives a pretty good idea of the tone.

Now that they have a screenplay and a trailer, Greenlight Essentials wants to make a full-blown movie. They started a Kickstarter campaign to raise the $22,843 more they need.

The greater promise of the AI behind "Impossible Things" is that it'll save the movie industry. 

Greenlight Essentials says the AI can look at screenplays and, based on the box office results of other movies, figure out if a screenplay will be profitable if it's filmed as a movie.

According to the company, 87% of films at the box office fail to break even, a claim I couldn't corroborate anywhere. Jack Zhang, the founder and CEO of Greenlight Essentials, told INSIDER he arrived at it by comparing the production budgets and box office revenues of films. However, studios tend to obscure the profits for individual films beyond just box office performance. They generate revenue by selling films to international distributors, by putting them on video-on-demand services like Netflix, by selling them on DVD, and with various merchandising opportunities.

Greenlight's 87% figure doesn't take any of that into account.

Big studios already use data models to predict box office performance, of course.

They're famously wary about spending tons of money on expensive products that don't have a proven track record. That's why they make so many sequels. 

Furthermore, large-scale data-driven screenplays have been tried as well, and the approach famously torpedoed Relativity Media into bankruptcy. So there are reasons to be wary.

Zhang, to his credit, takes the destruction of Relativity very seriously. He wrote a post explaining the problems with how Relativity ran its models and drew a distinction between that approach and Greenlight's artificial intelligence software: "All models are wrong, but some are useful, and that is the logic behind Greenlight Essentials’ software: to be useful. We do not make any assumptions to build models or simulate risks; rather, we only use real-world real data points to draw real conclusions."

Those conclusions mean taking fewer risks. The trailer for "Impossible Things" doesn't look like anything we haven't seen before. If the AI is able to write new screenplays based on data from successful movies, then the movies it writes will look like ones that already exist.

The real use for the AI will be for independent filmmakers.

Indie filmmakers don't have teams of data analysts at their disposal to figure out if a movie will make money, like big studios do. For a big studio, using this system just makes an already risk-averse company even more risk-averse.

For an independent project, having a computer program you can buy to help out changes the game. Plus, it'll actually help you write your movie instead of just telling you what works and what doesn't.

Most of the Kickstarter rewards for "Impossible Things" are the usual — DVDs, T-shirts, posters — but once you donate about $800, you'll get six months of access to the AI software that made "Impossible Things." Then you can have it make your own screenplay.

 


Foursquare's Dennis Crowley knows exactly where he's going – but it took a while



When Foursquare launched its app in 2009, founder Dennis Crowley wasn't quite sure where he was going with it.

The idea was that people would use the app to "check in" to particular locations, which seemed like a fun and interesting game. But he had no idea how it would evolve, or how it would make money.

Now he knows.

Foursquare's business model is selling its unique location-based data to companies, such as mobile advertisers and app makers, that are eager to use it.

"We can make this work with 50 million monthly active users because we don't need to monetize the audience through advertising. We can monetize it through data licensing, technology licensing, advertising products we have built on top of our location-intelligence platform," Crowley told Business Insider from the company's new San Francisco office, located in the Financial District near Chinatown.

It's taken a long time to figure out, but that knowledge has given Foursquare a new purpose.

Getting beyond social

In January, after almost a decade running the company, Crowley stepped aside as CEO, handing the job over to COO Jeff Glueck and becoming executive chairman.

At the same time, the company raised a new $45 million financing round led by Union Square Ventures with the participation of past investors like Andreessen Horowitz, but it reportedly had to take a cut in its valuation to do so. A report in Recode placed the company's value around $250 million, down from $650 million in 2013.

[Image: Foursquare leadership Steven Rosenblatt, Dennis Crowley, and Jeff Glueck]

Crowley, who is just returning from paternity leave after the birth of his daughter 11 weeks ago, said that the experience gave everyone at the company a clear picture of where they were going.

"We started off as, 'hey, we're this social app. We're this social game.' And then you start getting compared to every other social player in the space," he said.

Foursquare has about 50 million unique monthly users across its two apps — the original check-in app, which was rebranded Swarm in 2014, and the revamped Foursquare app, which is now a recommendation app similar to Yelp.

That's a lot fewer than other social networks like Facebook, which announced that it passed an almost inconceivable 1.7 billion monthly users this week, or Facebook subsidiary Instagram, which is over 500 million. Meanwhile, Twitter, at 300 million users, is having trouble with Wall Street because that number has stopped growing.

But Foursquare isn't in the same market as social networks because it's not selling ads against its audience. Instead, it's selling data and the technology used to make sense of that data.

"If we were having this conversation four years ago, we didn't know how those pieces were going to come together. But now we've got a whole bunch of technology companies that are licensing our data," said Crowley. "We have this map of the world, these sensor readings that other people don't, so we can build technology that powers stuff that doesn't exist yet."

One example offered by Crowley: Third-party-ad networks feed Foursquare information from the ads they're serving users, including the latitude and longitude of those users. Foursquare can tell when that location matches a particular business location. It can then build profiles of users based on their activities — "this person's a coffee drinker, and this person's shopping for a car, and this person's a business traveler"— and sell those segments to advertisers.

Foursquare, of course, isn't the only company with big troves of location data, Google being the most obvious.

But Crowley believes Foursquare's data is better because it's not just a map of the world. It can take very specific "fingerprints" of sensor readings, like Bluetooth beacons or the Wi-Fi signal strength from nearby networks, and use them to pinpoint your location in or near a particular business. Like a J Crew store. Or a bar.

Crowley calls that Foursquare's "superpower."


"When a phone walks into a place and this is what the Wi-Fi is, and this is the Bluetooth, and this is the GPS, we can compare it back to this model that we have," Crowley explained.

"The tricky thing is the only way you can make that map, and do that mapping, is by having people say, 'I'm at Red Lobster,' over and over and over again."

And that's what Foursquare's apps have been getting people to do for almost seven years now. It gets more than 8 million of these new "fingerprints" every single day, as people in countries like the US, Brazil, Japan, and Turkey check in.
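Foursquare hasn't published its matching model, but the lookup Crowley describes can be sketched as nearest-neighbor search over sensor fingerprints. The venues, feature layout, and numbers below are all illustrative:

```python
import numpy as np

# Per-venue "fingerprints" built from years of explicit check-ins: average
# signal strengths (dBm) for three nearby Wi-Fi networks plus a GPS coordinate.
# A real system would weight these very different feature types.
venues = {
    "Red Lobster": np.array([-40.0, -72.0, -88.0, 40.7411, -73.9897]),
    "J Crew":      np.array([-65.0, -45.0, -80.0, 40.7413, -73.9901]),
}

def closest_venue(reading):
    """Match a phone's current sensor reading to the nearest stored model."""
    return min(venues, key=lambda v: float(np.linalg.norm(venues[v] - reading)))

phone = np.array([-42.0, -70.0, -90.0, 40.7411, -73.9898])
print(closest_venue(phone))  # -> Red Lobster
```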

Sticking around through a down round

One testament to Foursquare's resilience was the fact that employees stuck around through a dramatic slash in the company's value — and the value of employee stock options — a process which one venture capitalist has likened to "brain damage."

How did Foursquare get people to stick around as it figured out where it was going?

The first key, said Crowley, was total transparency.

"The number-one thing we did right was we were super transparent about the decisions that we made, how they would affect employees, what we were going to do to continue to make this an awesome place to work and continue to make sure everyone's stock options would have great upside," he explained.

In fact, Crowley said, the company has always shared business details with the entire company, such as pitch decks and slides from board meetings, and that's helped make sure everybody understood what was happening.

In this case, he said, "When we announced all the changes we made, it was an hour-and-a-half long company meeting in which we walked people through all the financial details and walked them through the fund-raising process and showed them the deck. We don't pitch the company as, 'hey, we have these two apps.' We pitch the company as, 'hey, we have this data and enterprise business that's generating tons of revenue, and the way that it works is we have these two apps at the bottom that generate lots of data for us.' To get everybody on the same page was a great exercise."

The other reason employees have stuck around, Crowley believes, is because people are most satisfied with their work when they see it's having real results.

"There's not a gym and a haircut truck," he jokes. "There's a lot of people. They don't want the easier job at a big company where their role is to help push something one-tenth of 1% over the next six months. They want the harder job with the longer hours and maybe the org chart that's a little scrappier, but if they do their job right they're moving it 5%, 10%. Big meaty projects that they can own, and that's what we give people here."

Next up: an artificial assistant

Crowley's latest project is MarsBot, an artificial assistant that tracks your location and then sends you suggestions of things to do nearby.

Crowley cautions that it's not a finished product, just an early test, but I downloaded it anyway and immediately appreciated its conversational tone, although I haven't gotten any recommendations yet.

Crowley says he eventually wants to reach the same place that a lot of big companies like Google, Amazon, Facebook, Apple, and Microsoft seem to be heading: A personal assistant that can help suggest things for you before you even know you want them.

"There's so much stuff going on with chatbots, where I will text the bot and the bot will set up a haircut appointment for me. I think that's interesting and I'm excited to see where it goes, but we wanted to do the future version of that which just barely works right now, where you don't ask it things, it taps you on the shoulder."

He said he's inspired by Microsoft's original digital assistant for Office, the much-maligned Clippy, and by the Scarlett Johansson movie "Her," in which a man falls in love with a digital assistant that basically runs all aspects of his life.

"Let's make something where you don't have to say 'hey, Siri,'" he continued.

A few seconds later, my iPhone interrupted with Siri's voice asking what we wanted.

It seemed like a good time to end the interview — and a good reminder that there's still a lot of room for improvement when it comes to technology anticipating our needs.




Artificial intelligence in medicine is promising, but doubts remain



Scientists in Japan reportedly saved a woman’s life by applying artificial intelligence to help them diagnose a rare form of cancer.

Faced with a 60-year-old woman whose cancer was unresponsive to treatment, they supplied an AI system with huge amounts of clinical cancer case data, and in just ten minutes it diagnosed the rare leukemia that had stumped the clinicians.

The Watson AI system from IBM matched the patient's symptoms against 20 million clinical oncology studies covering symptoms, treatment, and response, uploaded by a team headed by Arinobu Tojo at the University of Tokyo's Institute of Medical Science.

The Memorial Sloan Kettering Cancer Center in New York has carried out similar work, where teams of clinicians and data analysts trained Watson’s machine learning capabilities with oncological data in order to focus its predictive and analytic capabilities on diagnosing cancers.

IBM Watson first became famous when it won the US television game show Jeopardy in 2011. And IBM’s previous generation AI, Deep Blue, became the first AI to best a world champion at chess when it beat Garry Kasparov in a game in 1996 and the entire match when they met again the following year.

From a perspective of technological determinism, it may seem inevitable that AI has moved from chess to cancer in 20 years. Of course, it has taken a lot of hard work to get it there.

But efforts to use artificial intelligence, machine learning and big data in healthcare contexts have not been uncontroversial.

On the one hand there is wild enthusiasm – lives saved by data, new medical breakthroughs, and a world of personalised medicine tailored to meet our needs by deep learning algorithms fed by smartphones and Fitbit wearables.

On the other there’s considerable scepticism – a lack of trust in machines, the importance of individuals over statistics, privacy concerns over patient records and medical confidentiality, and generalised fears of a Brave New World. Too often the debate dissolves into anecdote rather than science, or focuses on the breakthrough rather than the hard slog that led to it. Of course the reality will be somewhere in the middle.

There’s not just a technical battle to win


In fact, it may surprise you to learn that the world’s first computerised clinical decision-support system, AAPhelp, was developed in the UK way back in 1972 by Tim De Dombal and one of my colleagues, Susan Clamp.

This early precursor to the genius AI of today used a naive Bayesian algorithm to compute the likely cause of acute abdominal pain based on patient symptoms.

Feeding the system with more symptoms and diagnoses helped it become more accurate over time and, by 1974, De Dombal's team had trained the system to the point where it was more accurate at diagnosis than junior doctors, and almost as accurate as the most senior consultants. It took AAPhelp overnight to give a diagnosis, but this was on 1970s computer hardware.
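A naive Bayes diagnoser of the kind AAPhelp pioneered is small enough to sketch in full. The conditions, symptoms, and probabilities below are invented for illustration, not taken from De Dombal's system:

```python
# Invented priors P(condition) and likelihoods P(symptom | condition).
priors = {"appendicitis": 0.25, "gastroenteritis": 0.50, "cholecystitis": 0.25}
likelihoods = {
    "appendicitis":    {"rlq_pain": 0.8, "nausea": 0.6, "fever": 0.5},
    "gastroenteritis": {"rlq_pain": 0.1, "nausea": 0.8, "fever": 0.4},
    "cholecystitis":   {"rlq_pain": 0.2, "nausea": 0.5, "fever": 0.6},
}

def diagnose(symptoms):
    """Posterior over conditions, assuming symptoms independent given condition."""
    scores = {}
    for cond, prior in priors.items():
        p = prior
        for s in symptoms:
            p *= likelihoods[cond].get(s, 0.01)  # small floor for unseen symptoms
        scores[cond] = p
    total = sum(scores.values())
    return {cond: p / total for cond, p in scores.items()}

print(diagnose(["rlq_pain", "fever"]))  # appendicitis comes out on top
```

Each confirmed case updates the probability tables, which is why feeding the system more symptom-diagnosis pairs made it steadily more accurate.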

The bad news is that 40 years on, AAPhelp is still not in routine use.

This is the reality check for the most ardent advocates of applying technology to healthcare: to get technology such as predictive AIs into clinical settings where they can save lives means tackling all those negative connotations and fears. AI challenges people and their attitudes: the professionals that the machine can outperform, and the patients that are reduced to statistical probabilities to be fed into complex algorithms. Innovation in healthcare can take decades.

Nevertheless, while decades apart, both AAPhelp's and IBM Watson's achievements demonstrate that computers can save lives. But the use of big data in healthcare implies that patient records, healthcare statistics, and all manner of other personal details might be used by researchers to train the AIs to make diagnoses.

People are increasingly sensitive to the way personal data is used and, quite rightly, expect the highest standards of ethics, governance, privacy and security to be applied. The revelation that one NHS trust had given access to 1.6m identifiable patient records to Google's DeepMind AI laboratory didn't go down well when reported a few months ago.


The hard slog is not creating the algorithms, but the patience and determination required to conduct careful work within the restrictions of applying the highest standards of data protection and scientific rigour. At the University of Leeds' Institute for Data Analytics we recently used IBM Watson Content Analytics software to analyse 50m pathology and radiology reports from the UK. Recognising the sensitivities, we brought IBM Watson to the data rather than passing the data to IBM.

Using natural language processing of the text reports we double-checked diagnoses such as brain metastases, HER-2-positive breast cancers and renal hydronephrosis (swollen kidneys) with accuracy rates already over 90%. Over the next two years we’ll be developing these methods in order to embed these machine learning techniques into routine clinical care, at a scale that benefits the whole of the NHS.

While we’ve had £12m investment for our facilities and the work we’re doing, we’re not claiming to have saved lives yet. The hard battle is first to win hearts and minds – and on that front there’s still a lot more work to be done.

Owen A Johnson, Senior Fellow, University of Leeds

This article was originally published on The Conversation. Read the original article.


IGNITION 2016: IBM Watson General Manager will unveil the future of artificial intelligence


David Kenny, general manager for IBM Watson

We live in a world where artificial intelligence isn't science fiction — it's reality.

And, as Alex Konrad of Forbes previously reported, IBM Watson General Manager David Kenny is currently striving to make sure that AI becomes a "service" soon.

Watson is IBM's artificially intelligent supercomputer (not to mention a Jeopardy champion). According to Forbes, IBM and Kenny expect Watson to expand into a $10 billion business within the next ten years.

We're thrilled to announce that Kenny will be speaking about how AI is fueling the next massive wave of digital disruption and innovation at IGNITION 2016, Business Insider's flagship annual conference.

Kenny will be able to speak on all sorts of exciting AI developments happening at IBM. In an interview with Dean Demellweek that was published on LinkedIn, he revealed that Watson is currently learning all about Brexit, international law, and banking regulations. The tech leader explained that Watson will eventually be able to help global companies deal with compliance questions. The supercomputer is also poised to assist healthcare and financial institutions.

Other speakers include 21st Century Fox's CEO James Murdoch, Cisco's CEO Chuck Robbins, and Adobe's CMO Ann Lewnes.

Sign up to attend IGNITION 2016, which takes place December 5-7 at the Time Warner Center in New York City!

Act now — early-bird tickets are available for a limited time!



Apple completely changed how Siri works and almost nobody noticed (AAPL)



In summer 2014, Apple completely changed how Siri works.

The secretive Cupertino, California, company adapted Siri's voice recognition to use a cutting-edge artificial-intelligence technique called neural networks and switched it over on July 30, 2014, according to an in-depth feature by Steven Levy of Backchannel.

A neural network is a type of AI inspired by the human brain, one that has become especially useful thanks to today's powerful computers. Before the switch, Siri recognized human voices using more rudimentary AI techniques that had been around for decades.

It was the biggest change to Siri since it launched in 2011. And almost nobody noticed — there wasn't any press coverage beyond a few people noting that Apple had begun hiring neural-network experts. And to users, Siri continued to work the same. It just got better at understanding what you said.

"This was one of those things where the jump was so significant that you do the test again to make sure that somebody didn't drop a decimal place," Eddy Cue, Apple's internet services boss, told Backchannel.

In fact, Siri made half as many errors using the new neural network as it had before, according to Alex Acero, who leads the Siri speech team. "The error rate has been cut by a factor of two in all the languages, more than a factor of two in many cases," Acero said.

Apple has been criticized recently for seemingly falling behind rivals like Alphabet, Facebook, Microsoft, and Amazon at emerging AI techniques like neural networks, which is presumably why it opened the kimono and gave Levy access to many of its top AI experts.

The feature reveals several interesting facts about Apple's AI operations, including:

  • The entire size of the "AI brain" on an iPhone is about 200 MB.
  • Apple's buying a ton of small AI companies as acqui-hires — 20 to 30 companies a year, according to Cue.
  • Apple has decided to use graphics processing units for its AI. Other companies like Intel and Microsoft are pushing different approaches.
  • Apple tends not to hire established researchers for its AI efforts, instead hiring smart people and having them learn the techniques at Apple.

The entire read, on Backchannel, is illuminating and worth your time »



Salesforce's next big product will be called 'Einstein' (CRM)



Salesforce's enormous customer conference, Dreamforce, will take place in early October and the star of the show will be a new product called Einstein, CEO Marc Benioff revealed to Forbes' Alex Konrad.

Einstein is Salesforce's attempt to add artificial intelligence to its flagship customer relationship management product. While we don't know many details about the product yet, Benioff does tend to like to tease the company's major new product releases by dribbling out tidbits before the official announcement. 

For instance, Benioff tweeted out that Einstein will be the "first comprehensive AI platform for CRM." While that sounds like a bunch of buzzword gibberish, it underscores that Einstein is the AI product that Benioff had hinted about a few months back, when he spoke to Wall Street analysts during a quarterly conference call

"We are introducing this AI wrapper," he said. "Artificial intelligence is becoming part of Sales Cloud." He said at the time, that AI will eventually be part of all of the company's apps.

It's not the first AI product that Salesforce has offered its CRM customers. That honor would go to the SalesforceIQ Inbox app, a product that came from Salesforce's $390 million acquisition of RelateIQ in 2014 and that helps salespeople prioritize their inboxes.

And in a sign that's not terribly great news for Einstein, the guy who was running it, RelateIQ co-founder and CEO Steve Loughlin, has already announced his resignation from Salesforce, Konrad reports. He could be leaving for a VC job.

But Benioff believes so strongly in artificial intelligence as the Next Big Thing in the tech world that he's got others to turn to, such as Salesforce chief scientist Richard Socher. Socher is a Stanford AI researcher who co-founded a startup called MetaMind with backing from Benioff. Salesforce turned around and bought MetaMind outright in April for an undisclosed sum, shuttered it, and kept Socher and his team on the payroll.

And Benioff has bought other AI startups, such as machine-learning startup PredictionIO and smart calendar app Tempo AI. Before that, Salesforce was hiring data scientists, including poaching a number of them from LinkedIn, VentureBeat reported at the time.



IBM's Watson sorted through over 100 film clips to create an algorithmically perfect movie trailer



Movie trailers are often a bit formulaic. In fact, since many of them are edited so predictably, it seems even a computer can put one together. 

For the film "Morgan," which is due out in theaters on September 2, IBM's Watson made the first movie trailer ever edited by artificial intelligence.

To make the trailer, an IBM blog post explains, Watson analyzed over 100 horror and thriller film trailers to understand what sounds, scenes, and emotions to incorporate. The system looked at musical scores, the emotions in certain scenes (indicated by people's faces, color grading, and the objects shown), and the traditional order and composition of scenes in movie trailers.

After that, the system chose the best 10 moments for a trailer to include. Because the machine couldn't edit the film directly, the team brought in an in-house filmmaker to stitch it together. IBM says that cut the time and labor involved in the trailer-making process down from 10-30 days to 24 hours.
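IBM hasn't released the model, but the selection step it describes, scoring candidate scenes on musical, visual, and emotional features and keeping the best moments, reduces to a ranking problem. A sketch with invented features and weights:

```python
# Invented per-scene features (0-1) and weights; a real system would derive
# them from the ~100 reference trailers Watson analyzed.
scenes = {
    "lab_reveal":   {"tension": 0.9, "score_swell": 0.7, "face_fear": 0.8},
    "quiet_dinner": {"tension": 0.2, "score_swell": 0.1, "face_fear": 0.1},
    "corridor_run": {"tension": 0.8, "score_swell": 0.9, "face_fear": 0.6},
}
weights = {"tension": 0.5, "score_swell": 0.3, "face_fear": 0.2}

def trailer_worthiness(features):
    return sum(weights[k] * v for k, v in features.items())

# Keep the 10 best-scoring moments for a human editor to stitch together.
top_moments = sorted(scenes, key=lambda s: trailer_worthiness(scenes[s]), reverse=True)[:10]
print(top_moments)
```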

Appropriately, the film, which is distributed by FOX and directed by Luke Scott (Ridley Scott's son), is about an artificial human. The being ends up learning and developing too quickly for her own good and lashes out against the researchers who kept her in captivity, creating a moral quandary. 

If Watson's forays into music, Game of Thrones analysis, and cooking have been any indication, the supercomputer seems to be this decade's Renaissance man. And with this movie trailer under its belt, the AI has another feather in its cap.

Watch the trailer for the film below.


