Archives for category: Uncategorized

Apple Pay’s release today reminds me of an IBM commercial from back in 2006. I remember watching it over and over again when I was in middle school. With NFC and RFID technology, everything in today’s market seems to be going in the same direction. IBM is awesome.

Diego

October 06, 2014 3:28 AM ET

In the U.S., people born between 1980 and 2000 now outnumber baby boomers, and their numbers are still growing because of immigration. This generation is already shaping American life, and in a series of stories — largely reported by millennials themselves — NPR will explore how this new boom is transforming the country.

There are more millennials in America right now than baby boomers — more than 80 million of us.

And I’m gonna go ahead and guess that if you’re not a millennial, you kind of hate us.

We seem so lazy, so entitled. We still live with our parents. We love our selfies and we’re always talking about ourselves.

But, here’s my case: Millennials have already shaped your life.

The Millennial World

Let me start with those little screens we’re always on: Millennials aren’t simply users of social media. We invented it.

Mark Zuckerberg and the inventors of Instagram, Tumblr and Snapchat are all millennials and all millionaires. Oh, actually, Zuckerberg is worth billions.

Millennials were there first. We picked it out and showed everybody else how to use it.

These tools have also transformed some of the most important stories in the news.

So we’re all already living in a millennial world. It’s connected. It’s open.

And it’s diverse.

“Forty-three percent of millennials are nonwhite,” says Eileen Patten, a research analyst at the Pew Research Center (and a millennial herself). “When we look at older generations — boomers and silents — less than 3 in 10 were nonwhite.”

Because millennials look different en masse than generations past, the future is going to look different too. They’ve already led the country to massive shifts in opinion on social issues over the past decade.

“They’ve led the way in terms of same-sex marriage and marijuana legalization — majorities favor both,” Patten says. “They support granting citizenship to unauthorized immigrants — about half do — compared with lower shares among the older generations.”

As a whole, millennials are progressive and accepting. And for all you’ve heard about crippling student debt, high unemployment, “failure to launch” — we’re hopeful.

“[Millennials] are optimistic about their financial futures,” Patten says.

Try Something Else

The recession hit when many millennials were at the launch point of their careers.

One of them was Ryan Koo. He earned a bachelor of arts in film in 2003 and got a job at MTV in New York City. “I got laid off along with 700 other people on the same day at the end of 2008,” he says.

So he moved home to Durham, N.C., and tried something else.

“I started No Film School just as a personal blog,” Koo says. “My startup costs were like $600 I think.”

Today the ads pay his New York rent. He raised $125,000 on Kickstarter for his first feature film and got grants from more old-school places like the Tribeca and Sundance film festivals.

Koo is one of many millennials who feel like they can make something happen for themselves.

“Thirty-two percent say they currently earn enough to lead the kind of life they want. And 53 percent say they don’t, but they think they will in the future,” Patten says.

That includes the millions of millennials who are still in school, including Kyla Marrkand. She’s a high school senior at Bell Multicultural High School in Washington, D.C. She knows all about the tough economy and she’s realistic, but she believes it’s going to go well for her.

“Everybody doesn’t have the drive,” she says. “I have the drive.”

The New Boom

We millennials have drive. We are optimistic.

There are more than 80 million of us.

Which is why the millennials at NPR are reporting on our own generation for a series we’re calling #NewBoom.

We won’t be rehashing stereotypes. We won’t be dismissive or flip. Because if we — millennials and nonmillennials alike — are going to understand the future of the country, we need to understand this generation.

Millennials have already steered the country to a place where diplomats tweet, gay marriage is turning mainstream, and running a blog can be more financially secure than a company gig.

If we’ve done all that before 35, get ready.

These images show how awesome humans can be, or how cruel. The world we live in is not simple or perfect; the pictures below show our true colors, for good and for worse.

 

[Photo gallery: amazing people photos]

 

Enjoy

Last updated Oct 3, 2014, 7:19 AM PST

Facebook says it will change the way it conducts research on users of the social network

Facebook said it will change the way it does research, but stopped short of apologising for a controversial experiment it conducted this year.

In June, the site was criticised for manipulating the news feeds of nearly 700,000 users without their consent.

The network said it was “unprepared” for the backlash it received.

“[We] have taken to heart the comments and criticism. It is clear now that there are things we should have done differently,” Facebook said.

In a blog, chief technology officer Mike Schroepfer said the company should have “considered other non-experimental ways to do this research”.

He added: “In releasing the study, we failed to communicate clearly why and how we did it.”

The social network controlled the news feed of users over a one-week period in 2012 without their knowledge to manage which emotional expressions they were exposed to.

The experiment was part of a study by Facebook and two US universities. The social network said at the time it was to gauge whether “exposure to emotions led people to change their own posting behaviours”.

However, the company was widely criticised for manipulating material from people’s personal lives in order to play with user emotions or make them sad.

In response on Thursday, Facebook said that it was introducing new rules for conducting research on users with clearer guidelines, better training for researchers and a stricter review process.

But, it did not state whether or not it would notify users – or seek their consent – before starting a study.

The Information Commissioner’s Office (ICO) in London, which supports data privacy for individuals, said Facebook’s comments were “a step in the right direction”, but it hoped to hear more about how the social network intends to improve transparency.

“Organisations who want to process people’s personal information without explicitly asking for their permission, for instance to carry out research, always need to proceed with caution,” an ICO spokesman said.

Should Facebook apologise?

IDC research analyst Jan van Vonno said it was Facebook’s responsibility to notify users of any studies they were partaking in.

“They’re going to continue that research and what they should do is make users aware of what they’re doing and that’s not really what they’re doing right now,” Mr van Vonno said.

“An apology would be a sign of regret, and they obviously don’t regret any of their actions, because they think it’s for the benefit of their own platform.”

It was still important for Facebook to study consumer behaviour so it could maximize the impact advertisers had on the platform, which remains a huge source of revenue for the company, Mr van Vonno added.

The company’s mobile advertising revenue jumped 151% in the second quarter of this year from 2013 and accounted for more than 60% of its overall ad revenue.

Just this week, Facebook relaunched Atlas, an advertising platform it bought from Microsoft last year, to improve the effectiveness of its ads.

BBC © 2014

The Atlantic By Sara M. Watson
July 1, 2014 11:39 AM

Facebook has always “manipulated” the results shown in its users’ News Feeds by filtering and personalizing for relevance. But this weekend, the social giant seemed to cross a line, when it announced that it engineered emotional responses two years ago in an “emotional contagion” experiment, published in the Proceedings of the National Academy of Sciences (PNAS).

As a society, we haven’t fully established how we ought to think about data science in practice. It’s time to start hashing that out.

Before the Data Was Big…

Data by definition is something that is taken as “given,” but somehow we’ve taken for granted the terms under which we came to agree that fact. Once, the professional practice of “data science” was called business analytics. The field has now rebranded as a science in the context of buzzwordy “Big Data,” but unlike other scientific disciplines, most data scientists don’t work in academia. Instead, they’re employed in commercial or governmental settings.

The Facebook Data Science team is a prototypical data science operation. In the company’s own words, it collects, manages, and analyzes data to “drive informed decisions in areas critical to the success of the company, and conduct social science research of both internal and external interest.” Last year, for example, it studied self-censorship—when users input but do not post status updates. Facebook’s involvement with data research goes beyond its in-house team. The company is actively recruiting social scientists with the promise of conducting research on “recording social interaction in real time as it occurs completely naturally.” So what does it mean for Facebook to have a Core Data Science Team, describing their work—on their own product—as data science?

Contention about just what constitutes science has been around since the start of scientific practice. By claiming that what it does is data science, Facebook benefits from the imprimatur of an established body of knowledge. It looks objective, authoritative, and legitimate, built on the backs of the scientific method and peer review. Publishing in a prestigious journal, Facebook legitimizes its data collection and analysis activities by demonstrating their contribution to scientific discourse as if to say, “this is for the good of society.”

So it may be true that Facebook offers one of the largest samples of social and behavioral data ever compiled, but all of its studies—and this one, on social contagion—only describe things that happen on Facebook. The data is structured by Facebook, entered in a status update field created by Facebook, produced by users of Facebook, analyzed by Facebook researchers, with outputs that will affect Facebook’s future News Feed filters, all to build the business of Facebook. As research, it is an over-determined and completely constructed object of study, and its outputs are not generalizable.

Ultimately, Facebook has only learned something about Facebook.

The Wide World of Corporate Applied Science

For-profit companies have long conducted applied science research. But the reaction to this study seems to suggest there is something materially different in the way we perceive commercial data science research’s impacts. Why is that?

At GE or Boeing, two long-time applied science leaders, the incentives for research scientists are the same as they are for those at Facebook. Employee-scientists at all three companies hope to produce research that directly informs product development and leads to revenue. However, the outcomes of their research are very different. When Boeing does research, it contributes to humanity’s ability to fly. When Facebook does research, it serves its own ideological agenda and perpetuates Facebooky-ness.

Facebook is now more forthright about this. In a response to the recent controversy, Facebook data scientist Adam Kramer wrote, “The goal of all of our research at Facebook is to learn how to provide a better service…We were concerned that exposure to friends’ negativity might lead people to avoid visiting Facebook. We didn’t clearly state our motivations in the paper.”

Facebook’s former head of data science Cameron Marlow offers, “Our goal is not to change the pattern of communication in society. Our goal is to understand it so we can adapt our platform to give people the experience that they want.”

But data scientists don’t just produce knowledge about observable, naturally occurring phenomena; they shape outcomes. A/B testing and routinized experimentation in real time are done on just about every major website in order to optimize for certain desired behaviors and interactions. Google designers infamously tested up to 40 shades of blue. Facebook has already experimented with the effects of social pressure in get-out-the-vote campaigns, raising concerns about selective digital gerrymandering. What might Facebook do with its version of this research? Perhaps it could design the News Feed to show us positive posts from our friends in order to make us happier and encourage us to spend more time on the site? Or might Facebook show us more sad posts, encouraging us to spend more time on the site because we have more to complain about?
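The routinized experimentation described above can be illustrated with a toy A/B test. This is only a sketch: the deterministic 50/50 split, the button-color scenario, and the click numbers are all hypothetical, not any site's actual testing framework.

```python
def assign_variant(user_id: int) -> str:
    """Deterministic 50/50 split: even ids see variant A, odd ids see B."""
    return "A" if user_id % 2 == 0 else "B"

def click_rate(clicks: int, impressions: int) -> float:
    """Fraction of impressions that led to a click."""
    return clicks / impressions if impressions else 0.0

# Hypothetical results of a button-color test
rate_a = click_rate(120, 1000)  # variant A: 12% click rate
rate_b = click_rate(150, 1000)  # variant B: 15% click rate
winner = "B" if rate_b > rate_a else "A"
```

In practice a site runs many of these tests concurrently and ships whichever variant moves the metric it cares about, which is exactly how optimizing for "desired behaviors" becomes routine.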

Should we think of commercial data science as science? When we conflate the two, we assume companies are accountable for producing generalizable knowledge and we risk according their findings undue weight and authority. Yet when we don’t, we risk absolving practitioners from the rigor and ethical review that grants authority and power to scientific knowledge.

Facebook has published a paper in an attempt to contribute to the larger body of social science knowledge. But researchers today cannot possibly replicate Facebook’s experiment without Facebook’s cooperation. The worst outcome of this debacle would be for Facebook to retreat and avoid further public relations fiascos by keeping all its data science research findings internal. Instead, if companies like Facebook, Google, and Twitter are to support an open stance toward contributing knowledge, we need researchers with non-commercial interests who can run and replicate this research outside of the platform’s influence.

Facebook sees its users not as a population of human subjects, but as a consumer public. Therefore, we—that public and those subjects—must ask the bigger questions. What are the claims that data science makes both in industry and academia? What do they say about the kinds of knowledge that our society values?

We need to be more critical of the production of data science, especially in commercial settings. The firms that use our data have asymmetric power over us. We do them a favor by unquestioningly accepting their claims to the prestige, expertise, and authority of science as well.

Ultimately, society’s greatest concerns with science and technology are ethical: Do we accept or reject the means by which knowledge is produced and the ends to which it is applied? It’s a question we ask of nuclear physics, genetic modification—and one we should ask of data science.

By Adam Frank

June 11, 2013 2:41 PM ET
Big Data may not be much to look at, but it can be powerful stuff. For instance, this is what the new National Security Agency (NSA) data center in Bluffdale, Utah, looks like.


George Frey/Getty Images

New technologies are not all equal. Some do nothing more than add a thin extra layer to the top-soil of human behavior (e.g., Teflon and the non-stick frying pan). Some technologies, however, dig deeper, uprooting the norms of human behavior and replacing them with wholly new possibilities. For the last few months I have been arguing that Big Data — the machine-based collection and analysis of astronomical quantities of information — represents such a turn. And, for the most part, I have painted this transformation in a positive light. But last week’s revelations about the NSA’s PRISM program have put the potential dangers of Big Data front and center. So, let’s take a peek at Big Data’s dark side.

The central premise of Big Data is that all the digital breadcrumbs we leave behind as we go about our everyday lives create a trail of behavior that can be followed, captured, stored and “mined” en masse, providing the miners with fundamental insights into both our personal and collective behavior.

The initial “ick” factor from Big Data is the loss of privacy, as pretty much every aspect of your life (location records via mobile phones, purchases via credit cards, interests via web-surfing behavior) has been recorded — and, possibly, shared — by some entity somewhere. Big Data moves from “ick” to potentially harmful when all of those breadcrumbs are thrown in a machine for processing.

This is the “data-mining” part of Big Data and it happens when algorithms are used to search for statistical correlations between one kind of behavior and another. This is where things can get really tricky and really scary.
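The correlation hunt described above can be sketched with a toy example. The two behavior series below (monthly card spend and on-time payments) are invented for illustration, not drawn from any real scoring model; real data-mining pipelines search for such relationships across millions of records and thousands of variables.

```python
def pearson(xs: list, ys: list) -> float:
    """Pearson correlation coefficient between two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var_x = sum((x - mx) ** 2 for x in xs)
    var_y = sum((y - my) ** 2 for y in ys)
    return cov / (var_x * var_y) ** 0.5

# Invented behavior data: monthly card spend vs. on-time payments
spend = [200, 450, 300, 800, 650]
on_time = [1, 3, 2, 6, 5]
r = pearson(spend, on_time)  # close to 1.0: strongly correlated
```

A scoring algorithm that treats such a correlation as predictive is exactly where things get tricky: the statistical link says nothing about why the behaviors move together, yet decisions about real people get made on it.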

Consider, for example, the age-old activity of securing a loan. Back in the day you went to a bank and they looked at your application, the market and your credit history. Then they said “yes” or “no.” End of story. In the world of Big Data, banks now have more ways to assess your credit worthiness.

“We feel like all data is credit data,” former Google CIO Douglas Merrill said last year in The New York Times. “We just don’t know how to use it yet.” Merrill is CEO of ZestCash, one of a host of start-up companies using information from sources such as social networks to determine the probability that an applicant will repay their loan.

Your contacts on LinkedIn can be used to assess your “character and capacity” when it comes to loans. Facebook friends can also be useful. Have rich friends? That’s good. Know some deadbeats, not so much. Companies will argue they are only trying to sort out the good applicants from the bad. But there is also a real risk that you will be unfairly swept into an algorithm’s dead zone and disqualified from a loan, with devastating consequences for your life.

Jay Stanley of the ACLU says being judged based on the actions of others is not limited to your social networks:

Credit card companies sometimes lower a customer’s credit limit based on the repayment history of the other customers of stores where a person shops. Such “behavioral scoring” is a form of economic guilt-by-association based on making statistical inferences about a person that go far beyond anything that person can control or be aware of.

The link between behavior, health and health insurance is another gray (or dark) area for Big Data. Consider the case of Walter and Paula Shelton of Gilbert, Louisiana. Back in 2008, BusinessWeek reported how the Sheltons were denied health insurance when records of their prescription drug purchases were pulled. Even though their blood pressure and anti-depression medications were for relatively minor conditions, the Sheltons had fallen into another algorithmic dead zone in which certain kinds of purchases trigger red flags that lead to denial of coverage.

Since 2008 the use of Big Data by the insurance industry has only become more entrenched. As The Wall Street Journal reports:

Companies also have started scrutinizing employees’ other behavior more discreetly. Blue Cross and Blue Shield of North Carolina recently began buying spending data on more than 3 million people in its employer group plans. If someone, say, purchases plus-size clothing, the health plan could flag him for potential obesity—and then call or send mailings offering weight-loss solutions.

Of course no one will argue with helping folks get healthier. But with insurance costs dominating company spreadsheets, it’s not hard to imagine how that data about plus-size purchases might someday factor into employment decisions.

And then there’s the government’s use, or misuse, of Big Data. For years critics have pointed to no-fly lists as an example of where Big Data can go wrong.

No-fly lists are meant to keep people who might be terrorists off of planes. It has long been assumed that data harvesting and mining are part of the process for determining who is on a no-fly list. So far, so good.

But the stories of folks unfairly listed are manifold: everything from disabled Marine Corps veterans to (at one point) the late Sen. Ted Kennedy. Because the methods used in placing people on the list are secret, getting off the list can, according to Conor Friedersdorf of The Atlantic, be a Kafka-esque exercise in frustration.

A 2008 National Academy of Sciences report exploring the use of Big Data techniques for national security made the dangers explicit:

The rich digital record that is made of people’s lives today provides many benefits to most people in the course of everyday life. Such data may also have utility for counterterrorist and law enforcement efforts. However, the use of such data for these purposes also raises concerns about the protection of privacy and civil liberties. Improperly used, programs that do not explicitly protect the rights of innocent individuals are likely to create second-class citizens whose freedoms to travel, engage in commercial transactions, communicate, and practice certain trades will be curtailed—and under some circumstances, they could even be improperly jailed.

So where do we go from here?

From credit to health insurance to national security, the technologies of Big Data raise real concerns about far more than just privacy (though those privacy concerns are real, legitimate and pretty scary). The debate opening up before us is an essential one for a culture dominated by science and technology.

Who decides how we go forward? Who determines if a technology is adopted? Who determines when and how it will be deployed? Who has the rights to your data? Who speaks for us? How do we speak for ourselves?

These are the Big Questions that Big Data is forcing us to confront.

Andrew Carnegie arrived in the U.S. in 1848 with barely a dollar to his name. By 1901, he was the richest man in the world.

At the height of his power, he was approached by a young journalist named Napoleon Hill who was interested in telling the stories of successful people.

Carnegie saw a special drive in Hill and in 1908 decided that Hill would document all of the strategies that made him a legendary businessman and philanthropist.

Together, they helped pioneer the self-help genre, and Hill’s 1937 book “Think and Grow Rich” has gone on to become one of the top-selling books of all time.

When Hill began his career writing about success, Carnegie gave him his “10 Rules of Success” that provided a foundation for much of Hill’s work. Here’s a synopsis of the rules, which appear in the forthcoming collection “The Science of Success”:

1. Define your purpose.

Create a plan of action and start working toward it immediately.

2. Create a “master-mind alliance.”

Contact and work with people “who have what you haven’t,” Hill says.

3. Go the extra mile.

“Doing more than you have to do is the only thing that justifies raises or promotions, and puts people under an obligation to you,” writes Hill.

4. Practice “applied faith.”

Believe in yourself and your purpose so fully that you act with complete confidence.

5. Have personal initiative.

Do what you have to without being told.

6. Indulge your imagination.

Dare to think beyond what’s already been done.

7. Exert enthusiasm.

A positive attitude sets you up for success and wins the respect of others.

8. Think accurately.

In Hill’s words, accurate thinking is “the ability to separate facts from fiction and to use those pertinent to your own concerns or problems.”

9. Concentrate your effort.

Don’t become distracted from the most important task you are currently facing.

10. Profit from adversity.

Remember that “there is an equivalent benefit for every setback,” Hill writes.

Read more: http://www.businessinsider.com/andrew-carnegies-rules-of-success-2014-5#ixzz33k5Yn2eF

Mary Carmichael is a FRONTLINE web associate producer.

For an ad campaign that started a revolution in marketing, the Pepsi Challenge TV spots of the 1970s and ’80s were almost absurdly simple. Little more than a series of blind taste tests, these ads showed people being asked to choose between Pepsi and Coke without knowing which one they were consuming. Not surprisingly, given the sponsor, Pepsi was usually the winner.

But 30 years after the commercials debuted, neuroscientist Read Montague was still thinking about them. Something didn’t make sense. If people preferred the taste of Pepsi, the drink should have dominated the market. It didn’t. So in the summer of 2003, Montague gave himself a ‘Pepsi Challenge’ of a different sort: to figure out why people would buy a product they didn’t particularly like.

What he found was the first data from an entirely new field: neuromarketing, the study of the brain’s responses to ads, brands, and the rest of the messages littering the cultural landscape. Montague had his subjects take the Pepsi Challenge while he watched their neural activity with a functional MRI machine, which tracks blood flow to different regions of the brain. Without knowing what they were drinking, about half of them said they preferred Pepsi. But once Montague told them which samples were Coke, three-fourths said that drink tasted better, and their brain activity changed too. Coke “lit up” the medial prefrontal cortex — a part of the brain that controls higher thinking. Montague’s hunch was that the brain was recalling images and ideas from commercials, and the brand was overriding the actual quality of the product. For years, in the face of failed brands and laughably bad ad campaigns, marketers had argued that they could influence consumers’ choices. Now, there appeared to be solid neurological proof. Montague published his findings in the October 2004 issue of Neuron, and a cottage industry was born.

Neuromarketing, in one form or another, is now one of the hottest new tools of its trade. At the most basic levels, companies are starting to sift through the piles of psychological literature that have been steadily growing since the 1990s’ boom in brain-imaging technology. Surprisingly few businesses have kept tabs on the studies – until now. “Most marketers don’t take a single class in psychology. A lot of the current communications projects we see are based on research from the ’70s,” says Justine Meaux, a scientist at Atlanta’s BrightHouse Neurostrategies Group, one of the first and largest neurosciences consulting firms. “Especially in these early years, it’s about teaching people the basics. What we end up doing is educating people about some false assumptions about how the brain works.”

Getting an update on research is one thing; for decades, marketers have relied on behavioral studies for guidance. But some companies are taking the practice several steps further, commissioning their own fMRI studies à la Montague’s test. In a study of men’s reactions to cars, Daimler-Chrysler has found that sportier models activate the brain’s reward centers — the same areas that light up in response to alcohol and drugs — as well as activating the area in the brain that recognizes faces, which may explain people’s tendency to anthropomorphize their cars. Steven Quartz, a scientist at Stanford University, is currently conducting similar research on movie trailers. And in the age of poll-taking and smear campaigns, political advertising is also getting in on the game. Researchers at the University of California, Los Angeles have found that Republicans and Democrats react differently to campaign ads showing images of the Sept. 11th terrorist attacks. Those ads cause the part of the brain associated with fear to light up more vividly in Democrats than in Republicans.

That last piece of research is particularly worrisome to anti-marketing activists, some of whom are already mobilizing against the nascent field of neuromarketing. Gary Ruskin of Commercial Alert, a non-profit that argues for strict regulations on advertising, says that “a year ago almost nobody had heard of neuromarketing except for Forbes readers.” Now, he says, it’s everywhere, and over the past year he has waged a campaign against the practice, lobbying Congress and the American Psychological Association (APA) and threatening lawsuits against BrightHouse and other practitioners. Even though he admits the research is still “in the very preliminary stages,” he says it could eventually lead to complete corporate manipulation of consumers — or citizens, with governments using brain scans to create more effective propaganda.

Ruskin might be consoled by the fact that many neuromarketers still don’t know how to apply their findings. Increased activity in the brain doesn’t necessarily mean increased preference for a product. And, says Meaux, no amount of neuromarketing research can transform otherwise rational people into consumption-driven zombies. “Of course we’re all influenced by the messages around us,” she says. “That doesn’t take away free choice.” As for Ruskin, she says tersely, “there is no grounds for what he is accusing.” So far, the regulatory boards agree with her: the government has decided not to investigate BrightHouse and the APA’s most recent ethics statement said nothing about neuromarketing. Says Ruskin: “It was a total defeat for us.”

With Commercial Alert’s campaign thwarted for now, BrightHouse is moving forward. In January, the company plans to start publishing a neuroscience newsletter aimed at businesses. And although it “doesn’t conduct fMRI studies except in the rarest of cases,” it is getting ready to publish the results of a particularly tantalizing set of tests. While neuroscientist Montague’s ‘Pepsi Challenge’ suggests that branding appears to make a difference in consumer preference, BrightHouse’s research promises to show exactly how much emotional impact that branding can have. Marketers have long known that some brands have a seemingly magic appeal; they can elicit strong devotion, with buyers saying they identify with the brand as an extension of their personalities. The BrightHouse research is expected to show exactly which products those are. “This is really just the first step,” says Meaux, who points out that no one has discovered a “buy button” in the brain. But with more and more companies peering into the minds of their consumers, could that be far off?

By Bill Chappell

Three popular pesticides will soon be illegal in the European Union, where officials hope the change helps restore populations of honey bees, vital to crop production, to healthy levels. The new ban will be enacted in December.

“I pledge to do my utmost to ensure that our bees, which are so vital to our ecosystem and contribute over €22 billion ($28.8 billion) annually to European agriculture, are protected,” said EU Health and Consumer Commissioner Tonio Borg.

Two European producers of the banned pesticides, Bayer of Germany and Syngenta of Switzerland, have said their products aren’t to blame for the bees’ decline. Called neonicotinoids, the pesticides will no longer be approved for use in European crops that include corn, rapeseed, and cotton.

Earlier this year, a European Food Safety Authority report found that the pesticides — clothianidin, imidacloprid and thiamethoxam — presented a risk to bees when they are exposed to the dust, pollen, or nectar of some treated crops.

In the U.S., a group of environmentalists and beekeepers have sued the Environmental Protection Agency to stop the use of two of the pesticides, as NPR’s Dan Charles recently reported.

The pesticides are “used to coat the seeds of many agricultural crops, including the biggest crop of all: corn,” Dan reported. “Neonics, as they’re called, protect those crops from insect pests.”

Critics of the pesticides say that while small doses of the chemicals may not be immediately toxic to bees, they disrupt the bees’ ability to work with their colonies, eventually leading to weakened hives that can’t sustain themselves — or pollinate plants.

“However, pesticide manufacturers and some scientists say no link has been proven between the use of neonicotinoids and a sharp decline in bee numbers in Europe in recent years,” Reuters reports, “a phenomenon known as ‘colony collapse disorder.’”

When the European Union’s member states voted on the issue, a qualified majority could not be reached, with 15 of the union’s 27 member states voting in favor. But its executive European Commission decided to move ahead with the ban, and to review its effects within two years.
