Channel: Artificial Intelligence

The NSA Is Building An Artificial Intelligence System That Can Read Minds


The NSA is working on a computer system that can predict what people are thinking.

"Think of 2001: A Space Odyssey and the most memorable character, HAL 9000, having a conversation with David. We are essentially building this system. We are building HAL. The system can answer the question, 'What does X think about Y?'"

These are the words of an unnamed researcher who discussed an amazing artificial intelligence system she was building at the NSA.

It sounds like something right out of science fiction -- a system that can, in effect, read people's thoughts.

It's called "Aquaint" (Advanced QUestion Answering for INTelligence), and PBS's James Bamford takes a stab at explaining how it works:

"As more and more data is collected -- through phone calls, credit card receipts, social networks like Facebook and MySpace, GPS tracks, cell phone geolocation, Internet searches, Amazon book purchases, even E-Z Pass toll records -- it may one day be possible to know not just where people are and what they are doing, but what and how they think."

Whether it works or not, we know that it's so intrusive that at least one researcher has quit over the idea of placing such a powerful system in the hands of an agency with little to no accountability.

At its best, the system could become a valuable tool used for national security and beating Watson at Jeopardy. At its worst, it sounds like something from Orwell's 1984.

Please follow SAI: Tools on Twitter and Facebook.

Join the conversation about this story »


This iPhone App Knows What You Like -- Before You Ask It A Single Question


Google CEO Larry Page and Microsoft CEO Steve Ballmer agree on one thing: the future of search is tied in with artificial intelligence.

Page has talked about the ideal search engine knowing what you want BEFORE you ask it, and Ballmer recently explained Microsoft's multibillion dollar investment in Bing by saying that search research is the best way to progress toward artificial intelligence apps that help you DO things, not just find things.

So both companies will probably be taking a very close look at Clever Sense, which launches its first iPhone app, a "personal concierge" called Alfred (formerly Seymour), today.

The app analyzes data from around the Web to figure out what you will like, based on similarities with other people. It's similar to the recommendation engines pioneered by Amazon -- "other people who bought X also bought Y" -- or the Music Genome Project that eventually grew into Pandora. Only it's applied to the real world.
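The "other people who bought X also bought Y" idea can be sketched in a few lines. This is not Clever Sense's or Amazon's actual algorithm, just a toy item-to-item co-occurrence counter over invented purchase data:

```python
from collections import defaultdict
from itertools import combinations

# Invented purchase histories, for illustration only.
baskets = [
    {"espresso maker", "grinder"},
    {"espresso maker", "grinder", "milk frother"},
    {"grinder", "milk frother"},
]

# Count how often each pair of items appears in the same basket.
co_counts = defaultdict(int)
for basket in baskets:
    for a, b in combinations(sorted(basket), 2):
        co_counts[(a, b)] += 1
        co_counts[(b, a)] += 1

def recommend(item, k=2):
    """Return up to k items most often co-purchased with `item`."""
    scored = [(n, other) for (it, other), n in co_counts.items() if it == item]
    return [other for n, other in sorted(scored, reverse=True)[:k]]
```

Production systems replace the raw counts with similarity scores (cosine similarity, conditional probability) and sparse storage, but the underlying co-occurrence idea is the same.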

Clever Sense CEO Babak Pahlavan explains that the company grew out of a research project into predictive algorithms that he was working on at Stanford three years ago. The technology crawls the Web looking for what users are saying about particular products, and is able to categorize the results into between 200 and 400 attributes and sentiments for each one.

For instance, if somebody visits a coffee shop and posts on Yelp, "the cappuccino at X was awesome but salad was crap," Clever Sense understands the words "awesome" and "crap," and also notes that "cappuccino" is a high-interest word for coffee shops.
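A toy version of that kind of tagging might look like the following. The word lists and the simple counting are invented for illustration; Clever Sense's actual attribute and sentiment models are far richer:

```python
# Invented word lists; the real system scores 200-400 attributes per venue.
POSITIVE = {"awesome", "great", "delicious"}
NEGATIVE = {"crap", "terrible", "stale"}
ATTRIBUTES = {"cappuccino", "salad", "espresso", "service"}

def tag_review(text):
    """Return the attributes mentioned and a crude net sentiment score."""
    words = text.lower().replace(",", " ").replace(".", " ").split()
    mentioned = [w for w in words if w in ATTRIBUTES]
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    return mentioned, score

attrs, score = tag_review("the cappuccino at X was awesome but salad was crap")
# attrs == ['cappuccino', 'salad']; score == 0 (one positive, one negative)
```

Note the limitation: a real system must also bind each sentiment to the right attribute ("awesome" to the cappuccino, "crap" to the salad); bag-of-words counting alone cannot tell them apart.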

This kind of analysis is performed millions of times per day. When it launches, Alfred will have a database of more than 600,000 locations with between 200 and 400 attributes rated FOR EACH ONE. As you rate places, the app will get even more accurate.

Alfred is focused on four categories -- bars, restaurants, nightclubs, and coffee shops -- but Clever Sense plans to apply its technology to other areas as well. Pahlavan explains that Clever Sense could work very well with daily deals services like Groupon, LivingSocial, or Google Offers -- instead of having merchants throw deals out to the entire world, they could target them at the users who would be most likely to buy.

At launch, the data is anonymous, but Alfred will feature Facebook Connect integration so it can add social data into its recommendations -- if it knows that a lot of your friends are saying positive things about a particular bar, it will weigh those recommendations more highly than statements from random strangers.

The company has been running on an investment of about $1.6 million from angel investors, but Pahlavan says the company is planning to raise further rounds later this year. That's assuming it doesn't get snapped up by a big company first.

Microsoft may have an inside shot -- Clever Sense is participating in the company's BizSpark program, which gives free software and other aid to startups -- but there are tons of other companies who should be interested in the technology as well.


You Can Learn How To Become More Rational


LessWrong is a community blog devoted to “refining the art of human rationality.” The blog is led by artificial intelligence theorist Eliezer Yudkowsky.

A charitable organization which Yudkowsky founded has received $1.1 million from Peter Thiel, and Yudkowsky has given a talk on rationality at Thiel’s hedge fund.

1.  Ask What Evidence Would Cause You To Change Your Mind.
Smart people can easily find reasons to support their views, so just looking at why you believe something isn’t enough to expel bias-based beliefs. Rather, also ask yourself what it would take for you to change your mind about a proposition. A devastating debating tactic is to ask your opponent what evidence would falsify his belief, since most people concentrate on the data that support their positions and never consider what kind of evidence might prove them wrong.

2.  Reversed Stupidity Is Not Intelligence.
Don’t think something is false because an idiot believes it to be true. 

3.  Avoid the Planning Fallacy.
Most people consistently underestimate how long it will take them to finish projects, so don’t trust your intuitive feel as to the amount of time it will take you to complete a task.

4.  Most Published Research Findings are False.
A reason why is illuminated by a top psychology journal that published a study showing the supposed existence of the ESP power precognition.  This journal, however, refused to publish a study that tried but failed to replicate the original result.  As LessWrong contributor Carl Shulman wrote: “From the journals' point of view, this (common) policy makes sense: bold new claims will tend to be cited more and raise journal status (which depends on citations per article).”

5.  Rational People Can’t Agree to Disagree.
If two rational people initially disagree then they should each use the fact of this disagreement as a reason to move towards the other person’s position.  Disagreement is disrespect because it implies that your position on a topic is more rational than the other guy’s.  

6.  Don’t Forget Tradeoffs When Choosing a Charity.
The $10,000 you donated to an art museum is $10,000 that could have gone to help desperately poor African children.  And the tradeoff exists even if you give $10,000 to both charities because you could have always donated $20,000 to the children.  

7.  You Can Face Reality.
“What is true is already so.
Owning up to it doesn't make it worse.
Not being open about it doesn't make it go away.
And because it's true, it is what is there to be interacted with.
Anything untrue isn't there to be lived.
People can stand what is true,
for they are already enduring it.”

The LessWrong community refers to this poem as the Litany of Gendlin.

8.  Break Through Your Ugh Fields.
Many of us have problems that are so unpleasant to deal with that we unconsciously flinch from even thinking about them, causing our brains to fail us when we most need them. You might be able to overcome this by looking for the flinch during times when you feel generally happy.

9.  Not Facing the Truth Imposes Huge Costs.
We make decisions based upon our perception of the world.  Allowing biases to infect our understanding of reality causes us to make poor decisions.

10.  Become More Awesome.
Possible means: master mental math, learn mnemonics, play n-back, become a lucid dreamer, learn symbolic shorthand, study Esperanto, exercise, eat better, become a PUA (if you’re a single male), deliberately expose yourself to rejection so you become less afraid of it, learn magic tricks or juggling, memorize information using spaced repetition, understand Bayes’ theorem, become a faster typist, challenge your senses by wearing a blindfold, eye patch, or colored goggles, stop using your dominant hand for a week, learn self-defense, or get trained in first aid.
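The Bayes’ theorem item rewards a quick worked example. The numbers below are invented, but the arithmetic shows why rationalists care about it: a positive result from a “95% accurate” test for a rare condition still leaves the hypothesis unlikely.

```python
# Invented numbers: a rare condition and a fairly accurate test.
p_h = 0.01              # prior: 1% of people have the condition
p_e_given_h = 0.95      # P(positive test | condition)
p_e_given_not_h = 0.05  # P(positive test | no condition)

# Total probability of a positive test, then Bayes' theorem:
# P(H|E) = P(E|H) * P(H) / P(E)
p_e = p_e_given_h * p_h + p_e_given_not_h * (1 - p_h)
p_h_given_e = p_e_given_h * p_h / p_e
# p_h_given_e is about 0.16: the positive test raises the probability
# sixteen-fold, yet the condition is still probably absent.
```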


This Is What Happens When Two Computers Talk To Each Other

Artificial Intelligence Pioneer John McCarthy Has Died

Artificial Intelligence Took America's Jobs And It's Going To Take A Lot More


It's not just the recession that killed American employment.

And it's not just robots taking jobs from autoworkers.

The scary new dimension is artificial intelligence, which could replace as many as 50 million professional jobs according to a recent book by software entrepreneur Martin Ford. Because let's face it, Siri or something like Siri has more potential than your secretary, law clerk, radiologist or fund manager.

The Economist discusses this trend in a chilling essay. Here's an excerpt:

America's current employment woes stem from a precipitous and permanent change caused by not too little technological progress, but too much. The evidence is irrefutable that computerised automation, networks and artificial intelligence (AI)—including machine-learning, language-translation, and speech- and pattern-recognition software—are beginning to render many jobs simply obsolete.

This is unlike the job destruction and creation that has taken place continuously since the beginning of the Industrial Revolution, as machines gradually replaced the muscle-power of human labourers and horses. Today, automation is having an impact not just on routine work, but on cognitive and even creative tasks as well. A tipping point seems to have been reached, at which AI-based automation threatens to supplant the brain-power of large swathes of middle-income employees.



How We Got Siri: The History Of Talking To Machines (AAPL)


Siri represents a mainstream success in getting people to communicate with their electronics, but it's hardly the first time someone talked to a computer.

Siri has plenty of ancestors dating all the way back to the 1960s, when "artificial intelligence" was still a brand-new term.

Did you know that the first program that understood typed English was created as a means to provide therapy to the user?

How intelligent can software become? Can it ever be "alive?"

The ability to talk to machines raises all of these questions and more. We caught an excellent episode of Radiolab on WNYC that tackles all of them.

Conversation is endlessly complex, but computers can be programmed to understand it

Language is totally nuts. Consider things like grammar, syntax, tone, and sarcasm. In order for a computer to communicate seamlessly with a user, it should understand all these aspects of language.

And here's the scary thing: the best programmers can make this happen.



It's not easy though

It's pretty amazing that a computer can turn something as complex as human speech into ones and zeroes.

Even the most basic human actions are made up of countless subroutines. If we want to put on a hat, for example, we just put it on. But for a computer to understand "put on a hat," it has to know what a hat is. Then it has to "locate the hat," "pick up the hat," and "place hat on head."

Well-programmed chatbots can understand loads of these primitive elements of language.
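The hat example can be made concrete with a toy planner. The plan table and step names below are invented; the point is just that one high-level command expands into many primitive subroutines:

```python
# Invented plan table: each high-level command maps to smaller steps.
PLANS = {
    "put on hat": ["identify hat", "locate hat", "pick up hat",
                   "place hat on head"],
    "locate hat": ["scan room", "match object to 'hat'"],
}

def expand(command):
    """Recursively expand a command into primitive steps."""
    if command not in PLANS:
        return [command]  # no entry: treat as primitive
    steps = []
    for sub in PLANS[command]:
        steps.extend(expand(sub))
    return steps

# expand("put on hat") yields five primitive steps, with "locate hat"
# itself expanded into scanning and matching.
```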



This guy started the "talking to computers" thing

The idea of personally communicating with a computer started in 1966.

MIT professor Joseph Weizenbaum became aware of "non-directive Rogerian therapy." It was a system in which a therapist would identify key words that a patient used and repeat them back.

For example, a patient might say, "I'm feeling depressed today." The therapist would say, "I'm sorry to hear you're feeling depressed."

Weizenbaum thought that behavior would be easy enough to program, so he did. He called it ELIZA, and it changed everything.
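A minimal sketch of ELIZA's trick, using a single hypothetical rule rather than Weizenbaum's full script of ranked patterns, looks like this:

```python
import re

def reply(utterance):
    """One invented Rogerian rule: reflect a 'feeling' statement back."""
    m = re.search(r"i'?m feeling (\w+)", utterance.lower())
    if m:
        return f"I'm sorry to hear you're feeling {m.group(1)}."
    return "Please tell me more."

# reply("I'm feeling depressed today")
# → "I'm sorry to hear you're feeling depressed."
```

The striking part, then and now, is how little machinery is needed to produce replies that feel attentive: there is no understanding here at all, only keyword reflection.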




ART CASHIN: We May Have Just Witnessed The Presence Of Artificial Intelligence In The Stock Market


Wednesday morning, there was a violent sell-off in stocks that seemed to start exactly at 10:00 AM.  This had some New York Stock Exchange floor traders scratching their heads.

Art Cashin, UBS Financial Services' director of floor operations, caught some of the trader chatter and reported it in this morning's Cashin's Comments.

It seems that the supernatural speed of the sell-off had traders thinking two words: artificial intelligence.

Here's an excerpt from this morning's Comments:

Algo My Way By Myself Or Open The Pod Doors, HAL - As noted, the instantaneous nature of the selloff around 10:00 raised lots of questions.  First of all, Mr. B does not usually mention QE3.  He might vaguely allude to further easing and other such code phrases.  Further, if it was a quick review of the prepared remarks by thousands of traders, how could the reaction be so instantaneous and uniform?

Those questions prompted an intriguing hypothesis that began circulating in early afternoon. The hypothesis postulated that the speech had been instantly parsed by a computer using artificial intelligence.

Recall that for several months now the Fed has been stressing its “dual mandate”.  It stressed that it couldn’t be tied down to worrying about inflation and a firm dollar with the economy still staggering.  The very weak labor market meant that they had to keep stimulating under its charge in the “dual mandate”.

Now, if you pick up your copy of Bernanke’s prepared statement, please read the first four paragraphs.  Note that paragraph three begins:

We have seen some positive developments in the labor market. Private payroll employment has increased by 165,000 jobs per month on average since the middle of last year, and nearly 260,000 new private-sector jobs were added in January. The job gains in recent months have been relatively widespread across industries….

Then paragraph four starts out:

The decline in the unemployment rate over the past year has been somewhat more rapid than might have been expected, given that the economy appears to have been growing during that time frame at or below its longer-term trend….

The jobs portion of the dual mandate looked suddenly less compelling.  You wouldn’t need much artificial intelligence to see that quickly and clearly.

So was the selloff started by someone’s version of HAL 9000?  We don’t know for sure.  There are said to be such experiments on trading desks at hedge funds and elsewhere.  And, it certainly fits the action to a tee.  We’ll check around the watering holes.
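Nobody outside those trading desks knows what such a program would look like, but the hypothesized mechanism, parsing the statement and trading on the balance of hawkish versus dovish phrases, can be sketched. Every phrase list and decision rule below is invented:

```python
# Invented phrase lists and decision rule; purely a sketch of the hypothesis.
HAWKISH = ["positive developments in the labor market", "job gains",
           "decline in the unemployment rate"]
DOVISH = ["further easing", "additional accommodation"]

def signal(statement):
    """Compare hawkish vs. dovish phrase counts in a statement."""
    text = statement.lower()
    hawk = sum(phrase in text for phrase in HAWKISH)
    dove = sum(phrase in text for phrase in DOVISH)
    if hawk > dove:
        return "SELL"  # less stimulus implied
    if dove > hawk:
        return "BUY"
    return "HOLD"

excerpt = ("We have seen some positive developments in the labor market. "
           "The job gains in recent months have been relatively widespread.")
# signal(excerpt) returns "SELL"
```

The point of the hypothesis is less the rule than the latency: a program like this reacts in microseconds, while a human trader is still reading paragraph one.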



ART CASHIN: Robots May Have Been Responsible For A Rare Synchronization In The Markets Yesterday


We live in an interesting time, don't we?  Everything used to be explained by Newton's laws of physics.  Then it was Einstein's theory of relativity.  Fold into that some Freud and modern economics.

And now it's robots.

Last week, Art Cashin speculated that artificial intelligence explained an unusually speedy reactionary stock market sell-off.

Today, he's noting that machines may once again explain an unusual synchronization in the financial markets yesterday.

From this morning's Cashin's Comments:

U.S. stocks spent the balance of the day trying, unsuccessfully to get to one knee.  By the close, the uniformity of the negative influence was evident.  The Dow was down 1.57%; the S&P -1.54%; Crude -1.6% and gold -1.7%.  That’s a remarkably tight uniformity in a group of widely different assets.  Such a cookie cutter result suggests the very strong influence of computer trading.
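The tightness Cashin describes is easy to quantify from the numbers he quotes: the four moves sit inside a band of roughly 0.16 percentage points.

```python
# The day's moves as Cashin lists them, in percent.
returns = {"Dow": -1.57, "S&P 500": -1.54, "Crude": -1.60, "Gold": -1.70}

spread = max(returns.values()) - min(returns.values())
# spread is about 0.16 percentage points across four very different
# assets -- the "cookie cutter" uniformity he attributes to computer trading.
```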


Meet Siri's Less Popular Sister: Trap.it


While Siri's sister company can't talk back to you, it is just as smart.

Like Siri, Trap.it came out of a $200 million program funded by DARPA.

The five-year program was called CALO, which is short for Cognitive Assistant That Learns and Organizes. 

Siri was acquired by Apple in April 2010.

Trap.it, on the other hand, is still a startup. 

It's a web-based personalized news reader. It lets you put in URLs of your favorite web pages, then figures out other sites to recommend for you based on your tastes.

Cofounder Hank Nothhaft told us, "The Internet isn't as fun and accessible as it was before. The Internet became Facebook, Twitter, and Pinterest. It is our gateway to the Web, but so much content is not surfaced. The Internet is more than pictures of friends at a party."

Trap.it looks at 100,000 blogs, Twitter feeds, and publishers. The sources aren't SEO garbage, Nothhaft said, and the company uses human curators to make sure the content it serves up is good.

"Without high quality content, there isn't much to consume. These days, people don't make the distinction between newspaper, magazine, and social. We just want it in a compelling format and want to be able to share it and socialize around it," Nothhaft said.

"It will build an interest model and rate more content as I go. It searches the Web on my behalf. Every morning, I get an email briefing of relevant articles," Nothhaft said.

Unlike Flipboard and other social magazines that are essentially prettified RSS readers, Nothhaft said, Trap.it actually has a recommendation engine -- similar to Zite, which was bought by CNN for a rumored price of more than $20 million.

While Trap.it started on the Web, the company plans on launching an iPad app in May. 


MBA Students Tried To Figure Out What IBM's Watson Should Be Used For, And Here's What They Came Up With


The folks at IBM are still trying to figure out what exactly to do with their amazing artificial intelligence computer Watson, and they're looking to all kinds of sources for inspiration.

One of those sources is academia.

MBA students at the University of Rochester recently participated in the first-ever Watson academic case competition. They were asked to figure out ways to "solve societal and business challenges using Watson," reports Ariel Schwartz at Fast Company.

Here's what they came up with:

1st place: Managing Data in the Eye of a Storm—Since Watson is incredibly effective at analyzing both unstructured and structured information, it could be used to identify weather patterns. It would combine weather data and the latest numbers from the census to help companies with crisis management.

2nd place: Mining for Insights, Literally—Watson's cognitive reasoning capabilities could be used to help energy companies figure out their environmental impact. It would bring together economic, health, safety reports, and more to find levels of risk and reward.

3rd place: Unpacking Big Data Improves Travel Experience—Watson could be used to reduce wait times, congestion, and improve the travel experience by analyzing tons of unstructured information. 

IBM plans to repeat the case competition format at other universities, reports Jack Seward at Innovation Trail.

Currently, Watson is only being used in a pilot program at health insurer WellPoint, but all these new ideas signal that there's definitely potential for IBM to make an impact with its AI technology.



Facebook's Cofounder And First Investor Are Trying To Replicate The Human Brain


Startup Vicarious is trying to build a computer with the human ability to see and perceive.

It's a really hard artificial-intelligence problem that, once solved, has immense potential to change the world.

And this particular startup just got an immense boost from some big names in the tech industry. Its backers include Facebook and Asana cofounder Dustin Moskovitz. His venture-capital firm, Good Ventures, has led a $15 million round; it's joined by Peter Thiel's Founders Fund and Open Field Capital, as well as some angel investors.

Moskovitz and Founders Fund were also seed investors (among others) when the company launched two years ago.

Getting a computer (or robot) to capture an image is obviously easy. But getting the computer to truly see that image -- to intelligently understand what that image means, as a human does -- is insanely difficult, Vicarious cofounders Dileep George and Scott Phoenix told BI.

"The biggest problem in robotics is perception," George said. "They can make things that walk but they can't make things that see and understand what is in front of them. So we are solving the perception part."

George and Phoenix want to replicate the part of the brain called the neocortex, which commands higher functions such as sensory perception, spatial reasoning, conscious thought and language. In other words, they are trying to create "intelligence."

They are making progress, too. They've got a technology they call the Recursive Cortical Network, or RCN.

"Anything humans do right now with their eyes would be something that we could do in an automated way," says Phoenix. "Is this cancer or not? Is this heart disease or not? Is there a manufacturing defect in any of these things? Take a picture of your dinner plate—how many calories are in it?"

Eventually RCN will lead to commercial products and services. Products based on the tech are "five years out, at least," says Phoenix, adding that the plan is to release some kind of software platform "upon which an enormous number of things we haven't even thought of can be built."


Cambridge University To Study Possibility That Technology Could Ultimately Destroy Human Civilization


A new Cambridge research center will assess whether advanced technology could destroy human civilization, Brid-Aine Parnell of The Register reports.

The Centre for the Study of Existential Risk (CSER) will analyze risks of biotechnology, artificial intelligence, nanotechnology, nuclear war and climate change to the future of mankind.

Co-founders Jaan Tallinn—one of the founders of Skype—and philosopher Huw Price believe that the evolution of computing complexity will eventually lead to artificial general intelligence (AGI), which will eventually be able to write the computer programs and create the tech to develop its own offspring.

"It seems a reasonable prediction that some time in this or the next century intelligence will escape from the constraints of biology," Price said in a press release. "We need to take seriously the possibility that there might be a ‘Pandora’s box’ moment with AGI that, if missed, could be disastrous."

CSER will ask experts from policy, law, risk, computing and science to advise the center and help with investigating the risks.

One risk, according to Price, is that advanced technology could become a threat when computers start to direct resources towards their own goal at the expense of human desires.

“Think how it might be to compete for resources with the dominant species,” says Price. “Take gorillas for example – the reason they are going extinct is not because humans are actively hostile towards them, but because we control the environments in ways that suit us, but are detrimental to their survival.”



It Was A Bad Idea For Watson The Supercomputer To Learn The Urban Dictionary


"OMG, that's a bullshit question. Eat one, you doofus."

One as-yet-to-be-determined day in the future, computers will talk back to us in this manner.

But, thankfully, it would appear that Computers with Attitude are not yet on the near horizon.

Eric Brown, the IBM researcher charged with training Watson, the supercomputer that famously beat human all-comers in the US quiz show Jeopardy in 2011, has provided an interesting insight into just how hard it is to crack the ever-elusive nut of artificial intelligence (AI).

Speaking to Fortune magazine, Brown said that Watson can readily absorb information far beyond the capacity of any human, but where it struggles is understanding our subtlety of language, particularly the human predilection for slang.

"As humans, we don't realise just how ambiguous our communication is," he said. (To be fair, most users of Apple's Siri have largely deduced this already.)

To test Watson's skills at understanding slang, Brown instructed the supercomputer to digest the Urban Dictionary, the popular website that provides definitions for thousands of slang words, including ones of a particularly profane nature. In one test, Watson mistakenly used the word "bullshit" to answer one of Brown's queries. The Urban Dictionary has now been deleted from Watson's memory.
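The reported fix, deleting the Urban Dictionary, amounts to vocabulary filtering. A blocklist sketch like the one below (invented, not IBM's method) also shows why the problem is hard: offensiveness depends on context, not just on a word list.

```python
# Invented blocklist; IBM has not described its actual cleanup method.
BLOCKLIST = {"bullshit"}

def clean_vocabulary(vocab):
    """Drop blocklisted words from a learned vocabulary."""
    return {w for w in vocab if w.lower() not in BLOCKLIST}

# clean_vocabulary({"question", "answer", "bullshit"})
# → {"question", "answer"}
```

A static list can only ever block known strings; it cannot judge when a borderline word such as "wicked" is innocuous, which is exactly the contextual judgment Shadbolt describes below.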

"Computers are now incredibly impressive at what I call 'micro smarts', namely, very specific tasks involving encyclopedic levels of data but with clearly defined rules," says Nigel Shadbolt, professor of artificial intelligence at the University of Southampton.

"That's why Watson was able to win a quiz show such as Jeopardy. But what they still struggle with is knowing how to behave in a generalised situation."

Humans are "superb" at switching rapidly between rules of engagement, he adds. "We live in a mass soup of cultures, rules and contexts. Words such as 'wicked' and 'decent' now take on different meanings to different people.

"In some ways, to ask a computer to know how to use a word correctly in various contexts is the ultimate challenge. The inoffensive use of the term 'bullshit' in the right context is sometimes quite hard to judge for humans. Even prime ministers famously struggle to grapple with the use, or even meaning, of slang terms such as 'LOL'."

This article originally appeared on guardian.co.uk



What The Human Mind Has That A Robot Could Never Match


I just got done reading Ray Kurzweil's How to Create a Mind, his latest on how machines will soon (2030ish) pass the Turing test and then basically become like the robots envisaged in the '60s, with distinct personalities, acting as faithful butlers to our various needs.

And then, today over on The Edge, Bruce Sterling is saying that's all a pipe dream, computers are still pretty dumb.  As someone who works with computer algorithms all day, I too am rather unimpressed by a computer's intelligence.

He also notes that IBM's Watson won a Jeopardy! contest by reading all of Wikipedia, a feat clearly beyond any human mind. Further, as Kurzweil notes, many humans are pretty simple, and so it's not inconceivable that a computer can replicate your average human, if the average human is pretty predictable. Siri is already funnier than perhaps 10% of humans.

But I doubt they will ever approximate a human, because humans have what machines can't have: emotions. Emotions are necessary for prioritizing, and good prioritization is the essence of wisdom. One can be a genius, but if you are focused solely on one thing you are autistic, and such people aren't called idiot-savants for nothing.

Just as objectivity is not the result of objective scientists, but an emergent result of the scientific community, consciousness may not be the result of a thoughtful individual, but a byproduct of a striving individual enmeshed in a community of other minds, each wishing to understand the other minds better so that they can rise above them. I can see how you could program this drive into a computer, a deep parameter that gives points for how many times others call their app, perhaps.

Kurzweil notes that among vole species, those that form monogamous bonds have oxytocin and vasopressin receptors, and those that opt for one-night stands do not. Hard-wired emotions dictate behavior. But it's one thing to program a desire for company, an aversion to loneliness; another to desire a truly independent will.

Proto-humans presumably had the consciousness of dogs, so something in our striving created consciousness incidentally. Schopenhauer said "we don't want a thing because we have found reasons for it, we find reasons for it because we want it." The intellect may at times lead the will, but only as a guide leads the master. He saw the will to power, and fear of death, as being the essence of humanity. Nietzsche noted similarly that "Happiness is the feeling that power increases." I suppose one could try to put this into a program as a deep preference, but I'm not sure how: what, to a computer, would be analogous to the power wielded by humans?

Kierkegaard thought the crux of human consciousness was anxiety, worrying about doing the right thing. That is, consciousness is not merely having perceptions and thoughts, even self-referential thoughts, but doubt: anxiety about one's priorities and how well one is mastering them. We all have multiple priorities--self-preservation, sensual pleasure, social status, meaning--and the higher we go the more doubtful we are about them. Having no doubt, like having no worries, isn't bliss; it's the end of consciousness. That's what always bothers me about people who suggest we search for flow: like good music or wine, it's nice occasionally like any other sensual pleasure, but only occasionally, in the context of a life of perceived earned success.

Consider the Angler Fish. The smaller male is born with a huge olfactory system, and once he has developed some gonads, smells around for a gigantic female. When he finds her, he bites into her skin and releases an enzyme that digests the skin of his mouth and her body, fusing the pair down to the blood-vessel level. He is then fed by, and has his waste removed by, the female's blood supply, as the male is basically turned into a parasite. However, he is a welcomed parasite, because the female needs his sperm. What happens to a welcomed parasite? Other than his gonads, his organs simply disappear, because all that remains is all that is needed. No eyes, no jaw, no brain. He has achieved his purpose, and could just chill in some Confucian calm, but instead just dissolves his brain entirely.

A computer needs pretty explicit goals because otherwise the state space of things it will do blows up, and one can end up figuratively calculating the 10^54th digit of pi--difficult to be sure, and not totally useless, but still pretty useless. Without anxiety one could easily end up in an intellectual cul-de-sac and not care. I don't see how a computer program with multiple goals would feel anxiety, because computers don't have finite lives: they can work continuously, forever, making it nonproblematic that one didn't achieve some goal by the time one's eggs ran out. Our anxiety makes us satisfice, or find novel connections that don't do what we originally wanted but are very useful nonetheless, and in the process help increase our sense of meaning and status (often, by helping others).

Anxiety is what makes us worry that we are, at best, maximizing an inferior local maximum, and so need to start over, and this helps us figure things out with minimal direction. A program that does only what you tell it to do is pretty stupid compared to even stupid humans, and don't think for a second that neural nets or hierarchical hidden Markov models (HHMMs) can figure out things that aren't extremely well defined (like solving captchas, where Kurzweil thinks HHMMs show us something analogous to human thought).
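The local-maximum worry can be made concrete with a standard optimization sketch (my illustration, not the author's; the landscape and function names are made up): a greedy hill climber that only accepts improving moves settles on whatever peak is nearest, while periodically starting over, the algorithmic stand-in for doubting your current path, finds better ones.

```python
import random

def hill_climb(f, x, step=0.1, iters=1000):
    """Greedy local search: accept a move only if it improves f."""
    for _ in range(iters):
        candidate = x + random.uniform(-step, step)
        if f(candidate) > f(x):
            x = candidate
    return x

def with_restarts(f, restarts=20):
    """Occasionally abandon the current peak and start from scratch."""
    best = None
    for _ in range(restarts):
        x = hill_climb(f, random.uniform(-10, 10))
        if best is None or f(x) > f(best):
            best = x
    return best

# A landscape with a small peak near x = -2 and a tall one near x = 5.
f = lambda x: -0.1 * (x + 2) ** 2 + 1 if x < 1 else -0.1 * (x - 5) ** 2 + 3

stuck = hill_climb(f, -10.0)   # tops out on the nearest, inferior peak
better = with_restarts(f)      # restarts reliably find the taller peak
```

A climber started at x = -10 can never do better than the small peak (f = 1), because every path to the taller one passes through worsening moves it refuses to take; the restart is what the essay's "start over" amounts to algorithmically.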

Schopenhauer, Kierkegaard, and Nietzsche were all creative, deep thinkers about the essence of humanity, and they were all very lonely and depressed. When young they thought they were above simple romantic pair bonds, but all seemed to have deep regrets later, and I think this caused them to apply themselves more resolutely to abstract ideas (also, alas, women really like confidence in men, which leads to all sorts of interesting issues, including that their doubt hindered their ability to later find partners, and that perhaps women aren't fully conscious (beware troll!)). Humans have trade-offs, and we are always worrying whether we are making the right ones, because no matter how smart you are, you can screw up a key decision and pay for it the rest of your life. We need fear, pride, shame, lust, greed and envy, in moderation, and I think you can probably get those into a computer. But anxiety, doubt, I don't think can be programmed, because logically a computer is always doing the very best it can: its only discretion is purely random, so it perceives only risk, not uncertainty (per Keynes/Knight/Minsky), and thus feels no doubt.



IBM Has Sent A Supercomputer To College To Learn How To Be More Human (IBM)



TROY, N.Y. (AP) — Watson, the supercomputer famous for beating the world's best human "Jeopardy!" champions, is going to college.

IBM is announcing Wednesday that it will provide a Watson system to Rensselaer Polytechnic Institute, the first time the computer is being sent to a university. Just like the flesh-and-blood students who will work on it, Watson is leaving home to sharpen its skills. Course work will include English and math.

"It's a big step for us," said Michael Henesey, IBM's vice president of business development. "We consider it absolutely strategic technology for IBM in the future. And we want to evolve it, of course, thoughtfully, but also in collaboration with the best and brightest in academia."

Watson is a cognitive system that can process massive amounts of data, including natural language. To beat "Jeopardy!" champions in 2011, it was fed the contents of encyclopedias, dictionaries, books, news dispatches and movie scripts. For its medical work, it takes in medical textbooks and journals. After it takes in data, Watson can provide information like a "Jeopardy!" answer, a medical diagnosis or an estimate of financial risk.
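The ingest-then-answer loop described above can be caricatured in a few lines (my sketch, vastly simpler than Watson's actual DeepQA pipeline; the corpus and function names are invented): candidate answers are ranked by how much their evidence text overlaps the question.

```python
from collections import Counter

# A toy corpus standing in for the encyclopedias and journals Watson ingests.
corpus = {
    "Toronto": "Toronto is the largest city in Canada and the capital of Ontario.",
    "Chicago": "Chicago is a major city on Lake Michigan with two airports.",
    "IBM": "IBM is a technology company headquartered in Armonk, New York.",
}

def tokenize(text):
    return [w.strip(".,?").lower() for w in text.split()]

def answer(question):
    """Rank candidate answers by word overlap between question and evidence."""
    q = Counter(tokenize(question))
    scores = {
        cand: sum((q & Counter(tokenize(evidence))).values())
        for cand, evidence in corpus.items()
    }
    return max(scores, key=scores.get)

print(answer("Which technology company is headquartered in New York?"))  # IBM
```

Real systems combine hundreds of such evidence scores with confidence estimates; the point is only the shape: ingest text, score candidates against the question, return the best-supported answer.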

IBM, which provided a grant to RPI to operate Watson for three years, sees the arrangement as a way to boost the computer's cognitive capabilities.

Artificial intelligence researchers at RPI want to do things like improve Watson's mathematical ability and help it quickly figure out the meaning of new or made-up words. They want to improve its ability to handle the torrent of images, videos and emails on the Web, the sort of unstructured information that is overwhelmingly fueling the data boom.

For Selmer Bringsjord, who heads RPI's department of cognitive science, getting a crack at Watson is like a car aficionado being tossed the keys to a souped-up Lamborghini. Bringsjord said he and his graduate students could potentially focus on providing Watson with a deeper understanding of the structure of sentences and how dialogues unfold.

"If I can make a tiny, tiny contribution in that direction, given how historic the system is, I'd be very happy and I think my graduate students would be as well," Bringsjord said.

The original Watson remains at IBM's Research Headquarters in Westchester County, about 100 miles south of the school. RPI has hardware fully dedicated to running the system's software at its supercomputing center in the Rensselaer Tech Park near the school. RPI's version of Watson has 15 terabytes of memory, enough to store a massive library. It will allow 20 users to access the system at once.

IBM has worked collaboratively with other outside institutions on Watson, such as Memorial Sloan-Kettering Cancer Center in New York City, New York-based Citigroup Inc. and the Cleveland Clinic. But this is the first time hardware fully dedicated to running the Watson software is being installed at a college.

Officials with IBM and RPI say Watson's college tenure also will prepare RPI students for jobs in cognitive science and "big data," a field where demand is quickly outpacing supply. John Kolb, RPI's chief information officer, said he would like the next generation of the school's technology graduates "to help IBM take Watson to the next level."


Google Has Bought A Startup To Help It Recognize Voices And Objects (GOOG)


Google has bought a three-person Canadian startup, DNNresearch, based in Toronto, TechVibes reports.

The "DNN" in its name stands for "deep neural networks." That's a contemporary approach to designing artificially intelligent systems, one that requires less manual work to "train" them.
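For a sense of what "training" means here (a toy sketch of mine, not DNNresearch's actual work), a two-layer network can learn XOR purely by nudging its weights downhill on its error, the core mechanic that deep networks scale up:

```python
import numpy as np

rng = np.random.default_rng(0)

# XOR: the classic task a single-layer network cannot represent.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Two layers: 2 inputs -> 8 hidden units -> 1 output.
W1, b1 = rng.normal(0, 1, (2, 8)), np.zeros(8)
W2, b2 = rng.normal(0, 1, (8, 1)), np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(X):
    h = sigmoid(X @ W1 + b1)          # hidden activations
    return h, sigmoid(h @ W2 + b2)    # network output

def mse(out):
    return float(((out - y) ** 2).mean())

loss_before = mse(forward(X)[1])

lr = 0.5
for _ in range(5000):
    h, out = forward(X)
    # Backpropagation: the chain rule, written out by hand.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * (h.T @ d_out)
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * (X.T @ d_h)
    b1 -= lr * d_h.sum(axis=0)

loss_after = mse(forward(X)[1])
```

No rule about XOR is ever written down; the network is only shown examples and a way to reduce its error, which is why the approach takes less hand-engineering than rule-based systems.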

Google archrival Microsoft has already rolled out deep-neural-network technology in some of its audio- and video-indexing systems.

DNNresearch is led by Geoffrey Hinton, a professor at the University of Toronto, who will split his time between the university and Google, working from both its Toronto office and its Mountain View, Calif., headquarters. Two of Hinton's graduate students, Alex Krizhevsky and Ilya Sutskever, will relocate to California. Google had previously backed their research with a $600,000 grant, TechVibes reports.

Their technology, which delivers improved recognition of objects and images, could come in handy for Google Glass, the experimental Internet-connected headset, and for Google Now, Google's predictive-assistance technology, which attempts to deliver answers without forcing users to enter queries.

In other words, Google's already pretty smart—but deep-neural-network techniques could make it far, far smarter.


Harvard Cognitive Scientist Shares Fascinating Insights About The Human Mind



The more we learn about the human mind, the more fascinating it becomes, according to Harvard cognitive scientist Steve Pinker. He's known for his research in evolutionary psychology, or the idea that human nature has adapted over time to improve chances of survival.

He's written several bestselling books, including "How the Mind Works" and "The Blank Slate," and last night he answered readers' questions during a fascinating Reddit "Ask Me Anything." We've included some of the highlights below:

On how language is the most fascinating thing about the human mind:

Here we all are, banging at keyboards and reading squiggles on screens, and somehow we're exchanging ideas about consciousness, hunter-gatherer societies, rape, the meaning of life, and hair-care products (I'll get to that). Of course we're using written language, not to mention computer technology and the Internet, but we could be having the same conversation at a bar, dinner table or seminar room, so it's language itself that is the astounding phenomenon.

On how understanding the mind affects happiness:

I find a naturalistic understanding of human nature to be indispensable to leading a wise and mature life, and it is often exhilarating. Wisdom consists in appreciating the preciousness and finiteness of our own existence, and therefore not squandering it; of being cognizant of what makes people everywhere tick, and therefore enhancing happiness and minimizing suffering; of being alert to limitations and flaws in our own judgments and decisions and passions, and thereby doing our best to circumvent them.

On if society will ever develop a better theory for consciousness:

As for the strange problem of consciousness — whether the red that I see is the same as the red that you see; whether there could be a "zombie" that is indistinguishable from you and me but not conscious of anything; whether an upload of the state of my brain to the cloud would feel anything — I suspect the answer is "never," since these conundra may be artifacts of human intuition. Our best science tells us that subjectivity arises from certain kinds of information-processing in the brain, but why, intuitively, that should be the case is as puzzling to us as the paradoxes of quantum mechanics, relativity, and other problems that are far from everyday intuition.

On why AI will never match human intelligence:

We do have a decent understanding of consciousness in the sense of why an intelligent system might make available a pool of information to a variety of its modules while keeping other information encapsulated within those modules. The only sense of consciousness we don't understand is whether the artificially intelligent computer or robot we build would subjectively feel anything — but that has nothing to do with how we built it. That's why the problem is "strange."

On atheism and why religion is a "puzzle in psychology":

Atheism is simply the denial of one set of beliefs, and it has never been a priority to stipulate one among the many things I don't believe in. The atheist/humanist/freethinker/secularist/bright movement found me (and I'm happy to support it) because I presented a thoroughly naturalistic, ghost-free account of the mind in How the Mind Works, including an analysis of religious belief as an interesting puzzle in psychology.

On the hypothesis that depression is an adaptation:

I don't know the literature well enough to say, but it's not implausible that occasional, mild, temporary depression in response to an identifiable setback is an adaptation — the main reason being the phenomenon of depressive realism, namely the more accurate assessment of outcomes and probabilities among the (mildly, temporarily) depressed than among happy people. Clinical depression is another story.

SEE ALSO: More Proof That IQ Levels Are Rising


This Video Of A Robot, Hooded And Chained By The Neck As Humans Test It, Will Tug At Your Soul



Normally, when we talk about what advanced robots might do in the future, we worry that they might harm us. What if machines with artificial intelligence decide that we humans are getting it wrong, and substitute their judgment — and strength, and lack of feelings — for our own?

This video (below) of "Petman," a robot being developed by Boston Dynamics for use in testing protective clothing for soldiers under fire from chemical weapons, will make you realize that there's an alternative dystopian future for us and robots: One in which they become a vast underclass of mindless slaves, whom we humiliate with drudgery and danger.

I'm anthropomorphizing, of course. Robots are just machines like TVs and phones. Petman doesn't know that he's being chained by the neck, with a hood over his head like a terrorist suspect, as his humans test him on a treadmill.

And he'll probably fall over if the chains are taken off, as TechCrunch points out.

But the image of this robot being forced to run, blind and in shackles, is depressing nonetheless because it says so much about what we humans might do with sophisticated, lifelike machines, given a chance (and a windowless room).

The humiliation continues in this video, where Petman is required to dance to "Stayin' Alive":


Google's Big Chinese Competitor, Baidu, Is Setting Up Shop In Google's Backyard (GOOG)



About six miles south of the Googleplex in Mountain View, Calif., is the town of Cupertino. And that's where Chinese search company Baidu is opening a new office that will be home to something called The Institute of Deep Learning.

There, Baidu will be trying to build computers that mimic the human brain, thinking and learning like humans do, reports Wired's Daniela Hernandez.

Baidu is certainly not alone in the quest for more human-like computers. Google is obviously working on this too. About five months ago, Google nabbed one of the world's best-known advocates in the area of artificial intelligence when it hired legendary "futurist" Ray Kurzweil as its director of engineering.

Kurzweil is working on a similar project for Google. “We want to give computers the ability to understand the language that they’re reading," Kurzweil explained in an interview with Singularity Hub.

Baidu's research team leader Kai Yu is pretty straightforward about why the Chinese company chose Cupertino: he's trying to keep talented engineers out of Google's grasp.

“In Silicon Valley, you have access to a huge talent pool of really, really top engineers and scientists, and Google is enjoying that kind of advantage,” Yu told Hernandez.

This isn't the only Google-like project Baidu is working on either. Earlier this month, it confirmed rumors that it is developing a Google Glass-like wearable computer code-named ‘Baidu Eye’, the Next Web reported. It also said it has no plans yet to launch it as a consumer product.

SEE ALSO: Everything We Know About Microsoft's Next Version Of Windows, 'Windows Blue'

