
Reporting Made Easier in Messenger

By Hadi Michel, Product Manager, Messenger

Most people use Messenger to connect with family and friends, make plans, and share photos and videos with loved ones. To help minimize bad experiences that can get in the way of these connections, we’re introducing new tools on mobile for people to report conversations that violate our Community Standards. Previously, people were only able to report activity in Messenger via the Facebook reporting tools or Messenger web.

You can now access the reporting tool directly from any Messenger conversation on iOS or Android by:

1) Tapping the name of the person or group with whom you’re having a conversation.
2) Scrolling to Something’s Wrong.
3) Selecting from several categories, such as harassment, hate speech or pretending to be someone else.

You can also choose to ignore or block the person you are reporting. After completing your report, you’ll receive a confirmation that it was successfully submitted for review.

Providing more granular reporting options in Messenger makes it faster and easier to flag conversations for our Community Operations team, which reviews reports in over 50 languages. This means our community will see issues addressed faster and can continue to have positive experiences on Messenger.

We encourage people to use our reporting tools. The more you do, the more you assist us in keeping the Messenger community safe.

An Update on Our App Investigation and Audit

By Ime Archibong, VP of Product Partnerships

Here is an update on the app investigation and audit that Mark Zuckerberg promised on March 21.

As Mark explained, Facebook will investigate all the apps that had access to large amounts of information before we changed our platform policies in 2014 — significantly reducing the data apps could access. He also made clear that where we had concerns about individual apps we would audit them — and any app that either refused or failed an audit would be banned from Facebook.

The investigation process is in full swing, and it has two phases. First, a comprehensive review to identify every app that had access to this amount of Facebook data. And second, where we have concerns, we will conduct interviews, make requests for information (RFI) — which ask a series of detailed questions about the app and the data it has access to — and perform audits that may include on-site inspections.

We have large teams of internal and external experts working hard to investigate these apps as quickly as possible. To date, thousands of apps have been investigated and around 200 have been suspended — pending a thorough investigation into whether they did in fact misuse any data. Where we find evidence that these or other apps did misuse data, we will ban them and notify people via this website. It will show people if they or their friends installed an app that misused data before 2015 — just as we did for Cambridge Analytica.

There is a lot more work to be done to find all the apps that may have misused people’s Facebook data – and it will take time. We are investing heavily to make sure this investigation is as thorough and timely as possible. We will keep you updated on our progress.

Russian Ads Released by Congress

Today, House Intelligence Committee Ranking Member Adam Schiff published the 3,000 ads the Russia-based Internet Research Agency ran on Facebook and Instagram between 2015 and 2017. We gave these ads to Congress so they could better understand the extent of Russian interference in the last US presidential election.

In the run-up to the 2016 elections, we were focused on the kinds of cybersecurity attacks typically used by nation states, such as phishing and malware. We were too slow to spot a different kind of threat: interference through information operations. Since then, we’ve made important changes to prevent bad actors from using misinformation to undermine the democratic process.

This will never be a solved problem because we’re up against determined, creative and well-funded adversaries. But we are making steady progress. Here is a list of the 10 most important changes we have made:

1. Ads transparency. Advertising should be transparent: you should be able to see all the ads an advertiser is currently running on Facebook, Instagram and Messenger. And for issue and political ads, we’re creating an archive so you can search back seven years — including for information about ad impressions and spend, as well as demographic data such as age, gender and location. People in Canada and Ireland can already see all the ads that a Page is running on Facebook — and we’re launching this globally in June.

2. Verification and labeling. Every advertiser will now need to confirm their identity and location before being able to run any political or issue ads in the US. All political and issue ads will also clearly state who paid for them.

3. Updating targeting. We want ads on Facebook to be safe and civil. We thoroughly review the targeting criteria advertisers can use to ensure they are consistent with our principles. As a result, we removed nearly one-third of the targeting segments used by the IRA. We continue to allow some criteria that people may find controversial, because businesses also use them in legitimate ways, for example to market historical books, documentaries or television shows.

4. Better technology. Over the past year, we’ve gotten increasingly better at finding and disabling fake accounts. We now block millions of fake accounts each day as people try to create them — and before they’ve done any harm. This is thanks to improvements in machine learning and artificial intelligence, which can proactively identify suspicious behavior at a scale that was not possible before — without needing to look at the content itself.

5. Action to tackle fake news. A key focus is working to disrupt the economics of fake news. For example, we prevent the creation of fake accounts that spread it, ban sites that engage in this behavior from using our ad products, and demote articles found to be false by fact checkers in News Feed, which causes a demoted article to lose 80% of its traffic. We now work with independent fact checkers in the US, France, Germany, Ireland, the Netherlands, Italy, Mexico, Colombia, India, Indonesia and the Philippines, with plans to scale to more countries in the coming months.

6. Significant investments in security. We’re doubling the number of people working on safety and security from 10,000 last year to over 20,000 this year. We expect these investments to impact our profitability. But the safety of people using Facebook needs to come before profit.

7. Industry collaboration. Last month, we joined 34 global tech and security companies in signing a TechAccord pact to help improve security for everyone.

8. Intelligence sharing with government. In the 2017 German elections, we worked closely with the authorities there, including the Federal Office for Information Security (BSI). This gave them a dedicated reporting channel for security issues related to the federal elections.

9. Tracking 40+ elections. In recent months, we’ve started to deploy new tools and teams to proactively identify threats in the run-up to specific elections. We first tested this effort during the Alabama Senate election, and plan to continue these efforts for elections around the globe, including the US midterms. Last year we used public service announcements to help inform people about fake news in 21 separate countries, including in advance of French, Kenyan and German elections.

10. Action against the Russia-based IRA. In April, we removed 70 Facebook and 65 Instagram accounts — as well as 138 Facebook Pages — controlled by the IRA targeted at people living in Russia or Russian-speakers in Azerbaijan, Uzbekistan and Ukraine. The IRA has repeatedly used complex networks of inauthentic accounts to deceive and manipulate people in the US, Europe and Russia — and we don’t want them on Facebook anywhere in the world.

As our CEO and founder Mark Zuckerberg told Congress last month, we need to take a broader view of our responsibilities as a company. That means not just building products that help people connect — but also ensuring that they are used for good and not abused. We still have a long way to go, and we will keep you updated on our progress.

Jeff Zients Joins Facebook Board of Directors

Facebook today announced that Jeff Zients, the CEO of Cranemere, has been appointed to the company’s board of directors and audit committee, effective May 31, 2018, immediately following Facebook’s annual meeting of stockholders. Following Zients’s appointment, the board will consist of seven independent, non-employee directors out of nine total directors.

“I am proud to join the Facebook Board and I look forward to working with Mark and the other directors as the company builds for the future. This is an exciting time for the company, and I am delighted to be part of the Board as the company works to face the opportunities and challenges of trying to bring the world closer together,” said Zients.

Zients currently serves as the CEO of the Cranemere Group Limited, a diversified holding company. From March 2014 to January 2017, Zients served as Director of the National Economic Council for President Obama and served as Acting Director of the Office of Management and Budget from January 2012 to April 2013. He also founded and managed Portfolio Logic LLC, an investment firm, from 2003 to 2009. From 1992 to 2004, Zients served in various roles at the Advisory Board Company, a research and consulting firm, including as Chairman from 2001 to 2004 and Chief Executive Officer from 1998 to 2000. He also served as Chairman of the Corporate Executive Board, a business research firm, from 2000 to 2001. Zients holds a B.A. in political science from Duke University.

Zients serves on the board of Cranemere Group Limited and Timbuk2 Design. He is also a director of the Biden Cancer Initiative.

Facebook also announced today the following changes to the membership of its board committees: in addition to Zients, Kenneth I. Chenault will join the Audit Committee. Susan D. Desmond-Hellmann will move from the Audit Committee to the Compensation & Governance Committee, and Marc L. Andreessen will remain on the Audit Committee but step off the Compensation & Governance Committee.

Aside from Zients, Facebook’s current board members are: Mark Zuckerberg; Marc L. Andreessen, Andreessen Horowitz; Erskine B. Bowles, President Emeritus, University of North Carolina; Kenneth I. Chenault, Chairman and Managing Director, General Catalyst; Susan D. Desmond-Hellmann, CEO, Bill and Melinda Gates Foundation; Reed Hastings, Chairman and CEO, Netflix; Jan Koum, Founder, WhatsApp; Sheryl K. Sandberg, Chief Operating Officer, Facebook; and Peter A. Thiel, Founders Fund.


Moms Help Local Communities Grow

By Antigone Davis, Facebook’s Global Head of Safety

Every year people celebrate the mothers in their lives on Facebook and Messenger with memories and notes of love and gratitude. In fact, in 2017, Mother’s Day drove more posts in one day than any other topic on Facebook, and it was one of the biggest days on Messenger.

New survey data also suggests that as the parenthood journey progresses, parents are more likely to say they use Facebook daily as a resource. Using Groups and Fundraisers, connecting to buy and sell locally through Marketplace, and growing their small businesses with Pages, mothers are making a positive impact on their communities by collaborating and supporting one another:

  • Nzinga Jones is a leader of the Breastfeeding Support Group for Black Moms, a 45,000-member (and growing) community dedicated to empowering and supporting Black women as they learn about the benefits of breastfeeding.
  • Ali Maffucci is the founder of the culinary brand Inspiralized, a resource for cooking creatively, healthfully and deliciously with the spiralizer, the kitchen tool that turns vegetables and fruits into noodles. Ali has built a strong community on Facebook with groups like Inspiralized Kids and Inspiralized Mamas, where members can engage, share recipes and talk about cooking with their families.
  • Diana Blinkhorn is a mother of three and the voice behind The Gray Ruby Diaries, a blog about all things motherhood. She writes about her experiences raising three young girls to help encourage other mothers during difficult times. Diana has built a community on Facebook of more than 10,000 people, and she uses Marketplace to sell children’s items in her local area.

Just in time for Mother’s Day, Facebook, Messenger Kids and Messenger have created dozens of ways for people to celebrate and share their gratitude for the moms in their lives:

Wish Mom a Happy Mother’s Day: On Mother’s Day you may see a message at the top of News Feed wishing everyone a Happy Mother’s Day. You can swipe to see ways to show your mom or someone in your life how grateful you are for everything they do. You can choose between cards and photo frames, or post a message using the Camera or text post features.

Stay Connected with Mom: Messenger makes it easy for people to show their gratitude whether they are celebrating in person or sending love from afar. Check out the frames and stickers in the Messenger Camera to help you celebrate and share your appreciation, or have the whole family join the fun on a group video chat.

Never Miss a Moment with Messenger Kids: For Mother’s Day, kids can decorate their photos with new stickers to tell mom she’s the best, try out the Mother’s Day frame, or play with the special mask in the Messenger Kids camera.


This Mother’s Day, share some gratitude with moms and mother-figures who not only support their families and friends, but also help their local communities grow.

Hard Questions: Why Does Facebook Enable End-to-End Encryption?

Hard Questions is a series from Facebook that addresses the impact of our products on society.

By Gail Kent, Global Public Policy Lead on Security

End-to-end encryption is a powerful tool for security and safety. It lets patients talk to their doctors in complete confidence. It helps journalists communicate with sources without governments listening in. It gives citizens in repressive regimes a lifeline to human rights advocates. And by end-to-end encrypting sensitive information, a cyber attack aimed at revealing private conversations would be far less likely to succeed. But like most technologies, it also has drawbacks: it can make it harder for companies to catch bad actors abusing their services or for law enforcement to investigate some crimes.

I joined Facebook after two decades with the British National Crime Agency working on international investigations. My job was to work with law enforcement agencies around the world — including Interpol and Europol — to study how criminals communicate with each other.

We used encryption on a daily basis. It made it possible to communicate securely within our own organization as well as other agencies and sources in the field. But it could also create challenges in obtaining evidence. So I have experienced the trade-offs of encryption first hand. Yet I feel strongly that society is better off with it.

How It Works

End-to-end encryption is used in all WhatsApp conversations and is available as an opt-in feature in Messenger. End-to-end encrypted messages are secured with a lock, and only the sender and recipient have the special key needed to unlock and read them. For added protection, every message you send has its own unique lock and key. No one can intercept the communications.
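
For readers who want a concrete picture of “a unique lock and key for every message,” here is a minimal sketch in Python using the cryptography library. This is not WhatsApp’s actual implementation (WhatsApp uses the open source Signal protocol, which layers ratcheting and authentication on top of this idea); it simply shows a fresh key per message that only the recipient’s private key can recover:

```python
import os

from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
from cryptography.hazmat.primitives.ciphers.aead import AESGCM
from cryptography.hazmat.primitives.kdf.hkdf import HKDF


def encrypt_message(recipient_public_key, plaintext: bytes):
    # A fresh ephemeral keypair per message: the per-message "lock and key".
    ephemeral = X25519PrivateKey.generate()
    shared_secret = ephemeral.exchange(recipient_public_key)
    # Turn the Diffie-Hellman shared secret into a one-time symmetric key.
    key = HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
               info=b"demo-e2e").derive(shared_secret)
    nonce = os.urandom(12)
    ciphertext = AESGCM(key).encrypt(nonce, plaintext, None)
    # A server relaying (ephemeral public key, nonce, ciphertext) learns nothing.
    return ephemeral.public_key(), nonce, ciphertext


def decrypt_message(recipient_private_key, ephemeral_public_key, nonce, ciphertext):
    shared_secret = recipient_private_key.exchange(ephemeral_public_key)
    key = HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
               info=b"demo-e2e").derive(shared_secret)
    return AESGCM(key).decrypt(nonce, ciphertext, None)


# Only the holder of the recipient's private key can read the message.
recipient = X25519PrivateKey.generate()
eph_pub, nonce, ct = encrypt_message(recipient.public_key(), b"see you at 7")
assert decrypt_message(recipient, eph_pub, nonce, ct) == b"see you at 7"
```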

From my law enforcement days, I understand the frustration of this technology, especially when a threat may be imminent. And now that I’m at Facebook, which owns WhatsApp, I hear from government officials who question why we continue to enable end-to-end encryption when we know it’s being used by bad people to do bad things. That’s a fair question. But there would be a clear trade-off without it: it would remove an important layer of security for the hundreds of millions of law-abiding people that rely on end-to-end encryption. In addition, changing our encryption practices would not stop bad actors from using end-to-end encryption since other, less responsible services are available.

While some officials publicly acknowledge the benefits of end-to-end encryption, they simultaneously push for work-arounds that would allow them access to at least some information. A report by the Electronic Frontier Foundation earlier this year identified an effort, likely by a foreign nation, to trick people into installing spoof versions of messaging apps for intelligence purposes. And proponents of so-called “backdoors” imagine a hidden way of bypassing encryption, somehow accessing only the conversations of suspected criminals or terrorists while continuing to protect everyone else.

But cybersecurity experts have repeatedly proven that it’s impossible to create any backdoor that couldn’t be discovered — and exploited — by bad actors. It’s why weakening any part of encryption weakens the whole security ecosystem. And we rely on open source encryption protocols, encouraging people — and governments — to test the security of our systems. This constant auditing is another reason why decrypting certain conversations on behalf of governments, even if legal under local law, would not go unnoticed.

Working With Governments to Keep People Safe

My job involves working with government and law enforcement agencies to help keep people safe.

While we can’t access encrypted conversations, WhatsApp does collect some limited personal information about users in order to provide its service. When we receive valid legal requests, WhatsApp has shared these details to help law enforcement close in on a suspect. To help officials understand what information is available and how to request it, WhatsApp has hosted a number of training sessions around the world, including in Europe and Brazil, for police, judges and others. And we plan to host more sessions in the coming months.

WhatsApp’s response to an emergency request from law enforcement in Brazil helped rescue a kidnapping victim — and in Indonesia, it helped law enforcement prosecute a group spreading child exploitive imagery.

When it comes to encryption services, we know that working with governments can be controversial. But we believe it’s part of our broad responsibility to the communities we serve, so long as it’s consistent with the law and does not undermine the security of our products. Twice a year, we release a Transparency Report laying out every government request we get across Facebook, WhatsApp, Instagram and Messenger. We scrutinize each request for legal sufficiency and challenge those that are deficient or overly broad.

Looking Ahead

We’re constantly working to make sure that people understand how they can control their privacy and security. This means explaining both the strengths and limitations of end-to-end encryption so people can make the choices best for them.

For example, if someone gains access to your device, they will be able to see your messages. End-to-end encryption does not provide protection should you decide to download a chat to your computer or back it up to a cloud provider. Businesses you communicate with may also use other companies to store, read or respond to messages. Some technologies are better than others at supporting your privacy in these scenarios.

The debate around end-to-end encryption won’t and shouldn’t end anytime soon. People need secure ways to communicate and strong safeguards against everyday threats. We believe both of these goals can be achieved, and that end-to-end encryption need not be compromised in the process.

Facebook Community Boost Celebrates Small Business Week, Confirms Dates and Announces 13 New US Cities

By Doug Frisbie, Global Marketing Director, Small Business

As our CEO Mark Zuckerberg has said, we need to make it easier for people to grow businesses or find jobs. The 80 million small businesses on Facebook represent one of the largest communities of small businesses in the world. And we are building more technology and new programs based on their feedback to help them grow, trade, and hire.

Today, in celebration of Small Business Week, we are delighted to announce dates and new cities for Facebook events:

New dates for previously announced Facebook Community Boost cities:

  • Denver, CO — June 18-19
  • Hampton, VA — June 26-28
  • Phoenix, AZ — July 9-10
  • Buffalo, NY — July 9-11
  • Minneapolis, MN — July 18-19
  • Helena, MT — July 24-25
  • Columbus, OH — August 1-3
  • Menlo Park / East Palo Alto — August 27-30

New Facebook Community Boost Cities

  • San Diego, CA — August 6-7
  • Pittsburgh, PA — August 9-10
  • Topeka, KS — September 5-6
  • Springfield, MA — September 10-11
  • Jackson, MS — September 18-19
  • Atlanta, GA — September 24-26
  • Omaha, NE — October 2-4
  • Edison, NJ — October 8-9

New Boost Your Business Cities

  • Santa Fe, NM — May 3
  • Effingham, IL — May 3
  • New York City — May 22
  • Norwalk, CT — May 23
  • Greater Shreveport area — June 18

Our team is committed to investing in small businesses and their communities, and we’ll continue sharing what we learn along the way. We’d also love to hear your thoughts at facebook.com/communityboost, where you can stay up to date with the latest city, news and program announcements.

F8 2018: Open AI Frameworks, New AR/VR Advancements, and Other Highlights from Day 2

The second day of F8 focused on the long-term technology investments we are making in three areas: connectivity, AI, and AR/VR. Chief Technology Officer Mike Schroepfer kicked off the keynote, followed by Engineering Director Srinivas Narayanan, Research Scientist Isabel Kloumann, and Head of Core Tech Product Management Maria Fernandez Guajardo.

From advances in bringing connectivity to more people throughout the world to state-of-the-art research breakthroughs in AI to the development of entirely new experiences in AR/VR, Facebook continues to build new technologies that will bring people closer together and help keep them safe.

Artificial Intelligence 

We view AI as a foundational technology, and we’ve made deep investments in advancing the state of the art through scientist-directed research. Today at F8, our artificial intelligence research and engineering teams shared a recent breakthrough: the teams successfully trained an image recognition system on a data set of 3.5 billion publicly available photos, using the hashtags on those photos in place of human annotations. This new technique will allow our researchers to scale their work much more quickly, and they’ve already used it to score a record-high 85.4% accuracy on the widely used ImageNet benchmark. We’ve already been able to leverage this work in production to improve our ability to identify content that violates our policies.

Mike Schroepfer discusses our image recognition work at F8 2018

This image recognition work is powered by our AI research and production tools: PyTorch, Caffe2, and ONNX. Today, we announced the next version of our open source AI framework, PyTorch 1.0, which combines the capabilities of all these tools to provide everyone in the AI research community with a fast, seamless path for building a broad range of AI projects. The technology in PyTorch 1.0 is already being used at scale, including performing nearly 6 billion text translations per day for the 48 most commonly used languages on Facebook. In VR, these tools have helped in deploying new research into production to make avatars move more realistically.
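
To make the hashtag-as-labels idea concrete, here is a minimal PyTorch sketch of that style of weakly supervised, multi-label training. The production system described above used far larger models and 3.5 billion images; the model choice, vocabulary size and hyperparameters below are illustrative only:

```python
import torch
import torch.nn as nn
from torchvision import models

NUM_HASHTAGS = 10_000  # hypothetical hashtag vocabulary size

# One output per hashtag. A photo can carry several hashtags at once,
# so this is multi-label classification rather than one-of-N.
model = models.resnet50(num_classes=NUM_HASHTAGS)
criterion = nn.BCEWithLogitsLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9)


def train_step(images, hashtag_targets):
    """images: (N, 3, 224, 224); hashtag_targets: (N, NUM_HASHTAGS) multi-hot."""
    optimizer.zero_grad()
    logits = model(images)
    loss = criterion(logits, hashtag_targets)  # hashtags act as noisy labels
    loss.backward()
    optimizer.step()
    return loss.item()


# Smoke test with random tensors standing in for (photo, hashtags) pairs.
loss = train_step(torch.randn(8, 3, 224, 224),
                  torch.randint(0, 2, (8, NUM_HASHTAGS)).float())
```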

The PyTorch 1.0 toolkit will be available in beta within the next few months, making Facebook’s state-of-the-art AI research tools available to everyone. With it, developers can take advantage of computer vision advances like DensePose, which can put a full polygonal mesh overlay on people as they move through a scene — something that will help make AR camera applications more compelling.

For a deeper dive on all of today’s AI updates and advancements, including our open source work on ELF OpenGo, check out the posts on our Engineering Blog or visit facebook.ai/developers, where you can get tools and code to build your own applications.

AR/VR 

Facebook’s advancements in AR and VR draw from an array of research areas to help us create better shared experiences, regardless of physical distance. From capturing realistic-looking surroundings to producing next-generation avatars, we’re closer to making AR/VR experiences feel like reality.

Our research scientists have created a prototype system that can generate 3D reconstructions of physical spaces with surprisingly convincing results. The video below shows a side-by-side comparison between normal footage and a 3D reconstruction. It’s hard to tell the difference. (Hint: Look for the camera operator’s foot, which appears only in the regular video.)

Realistic surroundings are important for creating more immersive AR/VR, but so are realistic avatars. Our teams have been working on state-of-the-art research to help computers generate photorealistic avatars, seen below.

Connectivity  

These advances in AI and AR/VR are relevant only if you have access to a strong internet connection — and there are currently 3.8 billion people around the world who don’t have internet access. To increase connectivity around the world, we’re focused on developing next-generation technologies that can help bring the cost of connectivity down to reach the unconnected and increase capacity and performance for everyone else. In Uganda, we partnered with local operators to bring new fiber to the region that, when completed, will provide backhaul connectivity covering more than 3 million people and enable future cross-border connectivity to neighboring countries. Meanwhile, Facebook and City of San Jose employees have begun testing an advanced Wi-Fi network supported by Terragraph. Trials of Terragraph are also planned for Hungary and Malaysia. We are also working with hundreds of partners in the Telecom Infra Project to build and launch a variety of innovative, efficient network infrastructure solutions. And, as with our work in AI and other areas, we are sharing what we learn about connectivity so that others can benefit from it.

Watch the full keynote here.

To read more about yesterday’s announcements, read our Day 1 Roundup. For more details on today’s news, see our Developer Blog, Engineering Blog, Oculus Blog, Messenger Blog, Instagram Press Center and Newsroom. You can also watch all F8 keynotes on the Facebook for Developers Page.


F8 2018: Using Technology to Remove the Bad Stuff Before It’s Even Reported

By Guy Rosen, VP of Product Management

There are two ways to get bad content, like terrorist videos, hate speech, porn or violence, off Facebook: take it down when someone flags it, or proactively find it using technology. Both are important. But advances in technology, including in artificial intelligence, machine learning and computer vision, mean that we can now:

  • Remove bad content faster because we don’t always have to wait for it to be reported. In the case of suicide this can mean the difference between life and death. Because as soon as our technology has identified that someone has expressed thoughts of suicide, we can reach out to offer help or work with first responders, which we’ve now done in over a thousand cases.
  • Get to more content, again because we don’t have to wait for someone else to find it. As we announced two weeks ago, in the first quarter of 2018, for example, we proactively removed almost two million pieces of ISIS and al-Qaeda content — 99% of which was taken down before anyone reported it to Facebook.
  • Increase the capacity of our review team to work on cases where human expertise is needed to understand the context or nuance of a particular situation. For instance, is someone talking about their own drug addiction, or encouraging others to take drugs?

It’s taken time to develop this software – and we’re constantly pushing to improve it. We do this by analyzing specific examples of bad content that have been reported and removed to identify patterns of behavior. These patterns can then be used to teach our software to proactively find other, similar problems.
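
As a rough illustration of that report-then-learn loop (not Facebook’s actual pipeline; the data and model below are hypothetical stand-ins), a simple classifier can be fit on examples human reviewers have already removed, then used to score new content proactively:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical training data: posts reviewers kept (0) vs. removed (1).
posts = ["great recipe for pasta", "check out my vacation photos",
         "buy followers now cheap", "click here to win $$$"]
labels = [0, 0, 1, 1]

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
clf.fit(posts, labels)

# Score unreported content; the highest scores go to human reviewers first.
score = clf.predict_proba(["win cheap followers, click now"])[0, 1]
```

Here is how that proactive detection currently plays out across different types of content: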

  • Nudity and graphic violence: These are two very different types of content but we’re using improvements in computer vision to proactively remove both.
  • Hate speech: Understanding the context of speech often requires human eyes – is something hateful, or is it being shared to condemn hate speech or raise awareness about it? We’ve started using technology to proactively detect something that might violate our policies, starting with certain languages such as English and Portuguese. Our teams then review the content so what’s OK stays up, for example someone describing hate they encountered to raise awareness of the problem.
  • Fake accounts: We block millions of fake accounts every day when they are created and before they can do any harm. This is incredibly important in fighting spam, fake news, misinformation and bad ads. Recently, we started using artificial intelligence to detect accounts linked to financial scams.
  • Spam: The vast majority of our work fighting spam is done automatically using recognizable patterns of problematic behavior. For example, if an account is posting over and over in quick succession, that’s a strong sign something is wrong (a minimal sketch of this kind of rate check follows this list).
  • Terrorist propaganda: The vast majority of this content is removed automatically, without the need for someone to report it first.
  • Suicide prevention: As explained above, we proactively identify posts which might show that people are at risk so that they can get help.
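
As promised above, a minimal sketch of the posting-rate check described in the Spam item. The window and threshold are hypothetical, and a real system weighs many more signals:

```python
import time
from collections import defaultdict, deque

WINDOW_SECONDS = 60  # hypothetical look-back window
MAX_POSTS = 10       # hypothetical rate threshold

_recent_posts = defaultdict(deque)


def looks_spammy(account_id, now=None):
    """Record a post and return True if the account is posting
    over and over in quick succession."""
    now = time.time() if now is None else now
    times = _recent_posts[account_id]
    times.append(now)
    # Drop timestamps that have fallen out of the window.
    while times and now - times[0] > WINDOW_SECONDS:
        times.popleft()
    return len(times) > MAX_POSTS


# An account firing 11 posts within a few seconds trips the check.
flags = [looks_spammy("acct_42", now=1000.0 + i) for i in range(11)]
assert flags[-1] is True
```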

When I talk about technology like artificial intelligence, computer vision or machine learning, people often ask why we’re not making progress more quickly. And it’s a good question. Artificial intelligence, for example, is very promising, but we are still years away from it being effective for all kinds of bad content because context is so important. That’s why we still have people reviewing reports.

And more generally, the technology needs large amounts of training data to recognize meaningful patterns of behavior, which we often lack in less widely used languages or for cases that are not often reported. It’s why we can typically do more in English as it is the biggest data set we have on Facebook.

But we are investing in technology to increase our accuracy across new languages. For example, Facebook AI Research (FAIR) is working on an area called multi-lingual embeddings as a potential way to address the language challenge. It’s also why we sometimes ask people for feedback on posts containing certain types of content, to encourage them to flag it for review. And it’s why reports that come from people who use Facebook are so important – so please keep them coming. Because by working together we can help make Facebook safer for everyone.
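
For the technically curious: one published approach to multi-lingual embeddings, explored in FAIR’s open source MUSE work, learns an orthogonal map that aligns one language’s word-vector space with another’s, so a classifier trained on English vectors can score mapped vectors from other languages. A toy sketch, assuming a small seed dictionary of translation pairs:

```python
import numpy as np


def align_embeddings(src, tgt):
    """Procrustes alignment: find the orthogonal matrix W that best maps
    source-language vectors onto their target-language translations,
    given row-aligned pairs src[i] <-> tgt[i], each of shape (n, d)."""
    u, _, vt = np.linalg.svd(tgt.T @ src)
    return u @ vt  # minimizes ||src @ W.T - tgt|| over orthogonal W


# Toy data: 100 seed translation pairs of 50-dimensional vectors.
rng = np.random.default_rng(0)
src = rng.normal(size=(100, 50))
true_map = np.linalg.qr(rng.normal(size=(50, 50)))[0]  # a hidden rotation
tgt = src @ true_map.T

W = align_embeddings(src, tgt)
assert np.allclose(src @ W.T, tgt, atol=1e-8)  # the map is recovered
```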

F8 2018: Oculus Go Available Now for $199

Today, we’re excited to announce that Oculus Go, the company’s first standalone headset, is now available at oculus.com in 23 countries. Starting at $199 USD, Oculus Go is the most affordable way to get into VR. The comfortable, lightweight device is launching with 1,000+ apps, games, and experiences.

VR has the ability to let us see things from new angles, expand our horizons, and improve our understanding of the world around us. From new social apps that let you attend live events from the best seat in the house to more intimate experiences that explore pressing social issues, Oculus Go will change the way you watch, play, and hang out with friends while offering you new perspectives.

Oculus Go was designed to provide a high-quality fit and feel. By combining the best lenses of any Oculus headset to date with built-in spatial audio and an optimized software stack, Oculus Go provides a compelling all-in-one immersive experience where content shines. Learn more on the Oculus blog or visit oculus.com/go.

Making it Easier for Organizations and Businesses to Help People during a Crisis

By Asha Sharma, Product Lead, Social Good

People come to Crisis Response on Facebook to let friends and family know they’re safe, learn and share more about what’s happening and help communities recover. Community Help was added a year ago to make it easier for people to ask for and give help during a crisis. But people helping people is only part of the solution. Organizations and businesses also play an integral role in responding to crises and helping communities rebuild.

Today we’re announcing that organizations and businesses can post in Community Help, so that they can provide critical information and services for people to get the help they need in a crisis.

We’re beginning to roll out the feature to Pages for organizations and businesses like Direct Relief, Lyft, Chase, Feeding America, International Medical Corps, the California Department of Forestry and Fire Protection and Save the Children, and we will make the feature available to more in the coming weeks.

Enabling organizations and businesses to post in Community Help will give them a new way to reach communities impacted by crises. For example, they might post about helping people find everything from free transportation to supplies and connecting volunteers with organizations that need help.

Over the past year, people turned to Community Help for more than 500 different crises. Some of the crises where people used Community Help the most in 2017 include the flooding in Brazil (May), Hurricane Harvey in the US (August), the attack in Barcelona (August), the flooding in Mumbai (August) and the earthquake in Central Mexico (September).

People have also engaged with Community Help more than 750,000 times via posts, comments and messages, and the most frequent categories they use are volunteer opportunities, shelter, food and clothing donations.

Our priority is to build tools that help keep people safe and provide them with ways to get the help they need to recover and rebuild after a crisis. We hope this update makes it even easier for people to get the help they need in times of crisis and will give businesses and organizations an opportunity to build stronger communities around them.

Feeling the Love with Messenger

Messenger is a special place to connect and share with the people most important to you – whether making date night plans with your partner, arranging a girls night out with your closest friends, or jumping on a group video chat to catch up with your family.

In light of this, today we announced a fun and delightful feature just in time for Valentine’s Day. Starting tomorrow, if you newly indicate you’re in a romantic relationship on Facebook (aka make it “FB official”), you’ll get a Messenger notification that will open to your conversation with your loved one. From there…

  • It’s raining hearts! A heart shower will fall across your screen.
  • Spread the love: Your custom emoji (in the lower right hand corner) will be 😍, so expressing your love will be fast and easy.
  • Get personal: You’ll be prompted to personalize your chat and set your own custom text color, emoji, and nickname in case you want to switch things up even more.
  • Chatting with bae: Your loved one will be the first person to appear on the Active tab, so you can easily see when they’re available to chat.

We can’t wait to hear what you think of this new feature. Valentine’s Day may be a perfect time to finally make it “FB official”!

In the spirit of feeling all the feelings, we thought it would be timely to share how people in the Messenger community express their love and adoration.

We found that emojis are the new love language: people share over 2 billion emojis every day on Messenger, with 😘, 😍, and ❤ ranking in the top five most popular emojis.

Men and women actually express their love (via emojis) pretty similarly! For example, ❤ is the second most popular emoji for both men and women.

For those who like to personalize their chats, red is the most popular chat color and ❤ is the most popular custom emoji.

And to no surprise, Messenger continues to be the place where people come to connect and share when they’re feeling the love. Valentine’s Day was one of the most popular and active days on Messenger last year – we’re excited to see if that holds true for this year!

If you’re planning to send someone a love note tomorrow, make it special this year with a variety of fun filters and effects in the Messenger Camera. Try out the heart eyes filter (open your mouth and you’ll see a fun animation), add some festive flair with a falling candy heart effect, or channel your inner royalty with the Queen of Hearts filter. The Messenger Camera is one tap or swipe away whether you’re already in a conversation or you’ve just opened the app.

All of these festive filters and effects are also available in Messenger video chat! You can call a loved one by starting or opening a one-on-one or group chat and tapping the video icon in the top right corner. Then tap the star icon to access all of the fun filters and effects.

A New Investment in Community Leaders

By Jennifer Dulski, Head of Groups and Community, and Ime Archibong, Vice President, Product Partnerships

Today at the Facebook Communities Summit Europe, we announced the Facebook Community Leadership Program, a global initiative that invests in people building communities. Facebook will commit tens of millions of dollars to the program, including up to $10 million in grants that will go directly to people creating and leading communities.

In addition, we introduced new tools for group admins and the expansion of our London-based engineering team that builds technology to help keep people safe on Facebook.

More than 300 community leaders from across Europe came together today in London including Blind Veterans UK, an advocacy organization that provides practical and emotional support to blind veterans and their families; Donna Mamma, a support group for mothers in France to share advice and information; Girl Skate UK, which celebrates and brings together the female skateboarding community; High Society PL, a group of sneaker enthusiasts who bond over their shared passion; and Berlin Bruisers, Germany’s first gay and inclusive rugby club.

Facebook Community Leadership Program

Community leaders often tell us that with additional support they could have more impact. The Facebook Community Leadership Program is designed to empower leaders from around the world who are building communities through the Facebook family of apps and services. It includes:

  • Residency and Fellowship opportunities offer training, support and funding for community leaders from around the world.
    • Up to five leaders will be selected to be community leaders in residence and awarded up to $1,000,000* each to fund their proposals.
    • Up to 100 leaders will be selected for our fellowship program and will receive up to $50,000* each to be used for a specific community initiative.
  • Community Leadership Circles bring local community leaders together to meet up in person to connect, learn and collaborate. We piloted three circles in the US in 2017 and will be expanding globally this year.
  • Groups for Facebook Power Admins, which we currently run with more than 10,000 group admins in the US and UK, are expanding to more members to help them share advice with one another and connect with our team to test new features and share feedback.

Applications are now open for the residency and fellowship. To learn more and apply, visit communities.fb.com.

New Tools for Group Admins and Members

Group admins want to keep their communities safe, organized and engaged. Today we added four new features to support them.

  • Admin tools: Admins can now find member requests, Group Insights and more together in one place, making it easier to manage groups and freeing up more time for admins to connect with members.
  • Group announcements: Group admins want to be able to more easily share updates, so we’re introducing group announcements to let admins post up to 10 announcements that appear at the top of their group.
  • Group rules: Keeping communities safe is important. Now admins can create a dedicated rules section to help them effectively communicate the rules of the group to their members.
  • Personalization: Each community has its own identity — now admins can add a personalized color that is displayed throughout their group.

Expanding London Engineering Team for Community Safety

A team of engineers across the globe builds technologies that help keep our community safe and secure. London is home to Facebook’s largest engineering hub outside of the US, and by the end of 2018, we will double the number of people working in London on these issues.

Our engineering work on community safety includes the following:

  • Detecting and stopping fake accounts: Working to make sure Facebook is a community of people who can connect authentically with real people.
  • Protecting people from harm: Reducing things like harassment and scams that can happen in the real world and on Facebook, by building better tools to spot these issues and remove them.
  • Improving ways to report content: Making it easier for people to give us feedback about things that shouldn’t be on our platform, which works in conjunction with our automated detection.

People find meaning and support in community, online and in person. The programs and tools we announced today are designed to help the admins who lead these communities to grow and strengthen bonds among members. We are inspired by these leaders and look forward to continuing our efforts to support them.

*The final payment amount in USD may vary due to potential exchange rate fluctuation at the time of payment. At the time of announcement, $1,000,000 USD equates to approximately €810,000 Euros or £718,000 GBP; $50,000 USD equates to €40,500 Euros or £35,900 GBP.


Safer Internet Day: Teaching Children to Safely Engage Online and Supporting Parent Conversations

By Antigone Davis, Global Head of Safety

Every year on Safer Internet Day we recognize the importance of internet safety and the responsible use of online technology. This year, Safer Internet Day features a call to action focused on creating a better internet for everyone, including younger generations. As a company that reaches people around the world, we’re taking this call to action to heart.

Creating a better internet for kids starts with empowering parents. The fact that parents see themselves as the best judges of how their kids should use technology helped guide our development of the Messenger Kids app. Parents control their kids’ accounts and contacts through the Messenger Kids Controls panel, creating a safer and more controlled environment for their kids to talk to trusted contacts.

As a mom and Facebook’s Global Head of Safety, I know how overwhelming it can be to raise a child in an increasingly digital world. So this year to mark Safer Internet Day, we want to help parents start a conversation with their children about technology and the choices they make when they go online.

Tips to Keep Kids Safe

We often hear that parents aren’t sure how to approach these topics with their kids. To make it easier, we’ve compiled some tips to jump-start the conversation.

  • Let your child know the same rules apply online as they do offline. Just as you’d tell your child to look both ways before crossing the street or to wear a helmet while riding their bike, teach them to think before they share online and how to use the security and safety tools available on apps and devices.
  • Be a good role model. The saying that children will “do as you do, not as you say” is as true online as it is offline. If you set time restrictions on when your child can use social media or be online, follow the same rules yourself.
  • Engage early and often. Data suggests that parents should be a part of what their children are doing online as soon as they start to participate. Consider adding them as a friend when they create a social media account or an account on a messaging app, and have conversations with them often about what they’re doing and who they’re talking to when they go online.
  • Set the rules and know the tools. When your child gets their first tablet or phone and starts using apps, it’s a good time to set ground rules. It’s also a great time to take them through the basics of the tools available on the app. For instance, teach them how to report a piece of content and how to spot people who don’t have good intentions.
  • Ask your children to teach you. Children are often even more in touch with the newest apps and sites than adults, and they can be an excellent resource. The conversation can also serve as an opportunity to talk about issues of safety, privacy and security.

Listening to Parents

We recently conducted a survey of parents to get a fuller and clearer understanding of their attitudes toward technology. The survey found that:

  • 64% of parents trust themselves the most to guide their child’s technology use.*
  • 77% of parents say they are the most appropriate to determine how much time their child spends using online technologies.
  • 77% of parents say they are the most appropriate to decide the right age for their child to use online digital technologies.

When creating products for kids, we know we have to get it right. That means going beyond the basics of complying with child privacy laws. It’s why we’ve been talking to thousands of parents and top experts in the fields of child development, online safety and children’s media. It’s also why we’re investing in further research about kids and technology.

We’ve committed resources to partner with independent academics on research studies about kids, tweens and teens and technology. Our goal is to better understand the connection between young people’s well-being and how they use digital technology. We will also convene conversations with stakeholders over the course of this year, beginning with our Global Safety Network Summit in Washington, D.C., this March.

Introducing Parent Conversations: A New Section of the Parents Portal

We want to provide parents with information to make the decisions that are best for their families.

Today we’re launching a new section of our Parents Portal where parents can find the latest information from child development experts, academics, thought leaders and people at Facebook about topics related to kids and technology. We’ll post videos and Q&As, as well as interactive polls so parents can express their voice in these important conversations. To visit Parent Conversations and find tips on keeping your kids safe online in today’s digital age, visit facebook.com/Safety/Parents/Conversations.

*In February 2018, Facebook conducted an unbranded survey with an online panel provider. The participants were a nationally representative sample of 275 US parents of 6th – 12th graders and 604 children aged 8-17.

This Friends Day, Show Gratitude for Your Closest Friends

By Mike Nowak, Product Director, Goodwill

Think about your closest friends. Is there someone who always has your back? Someone who never forgets your birthday? Or perhaps someone you consider your very best friend…aka your bestie?

February 4 is Friends Day, a day to show gratitude for all the important people in your life. To help you celebrate, you will see a message from Facebook at the top of News Feed wishing you a Happy Friends Day with a personalized Friends Awards video.

After the short video, you can create and share your own Friends Awards, either choosing a pre-made award or creating your own from a template, like “Bestie,” “Great Listener” and “Knows How to Make Me Laugh.”

For those who want to participate in other ways, there are three unique Camera filters for people to share.

Facebook will also celebrate Friends Day with a series of short films that highlight five remarkable friendships from around the world.

From the beginning, Facebook has been about connecting with your friends, which is why we’re inspired to see more than 750 million new friendships formed on Facebook each day. We hope everyone will take time this weekend to show a little gratitude.

Check out Friends Awards and all the Friends Day films at facebook.com/friendsday.

News Feed FYI: More Local News on Facebook

By Alex Hardiman, Head of News Product, and Campbell Brown, Head of News Partnerships

People tell us they come to Facebook to connect with friends. They also say they want to see news about what’s happening in the world and their local community. This month, we’ve announced changes to prioritize posts from friends and high-quality news sources. Today, we’re updating News Feed to also prioritize local news so that you can see topics that have a direct impact on you and your community and discover what’s happening in your local area.

We identify local publishers as those whose links are clicked on by readers in a tight geographic area. If a story is from a publisher in your area, and you either follow the publisher’s Page or your friend shares a story from that outlet, it might show up higher in News Feed.
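
As an illustration only (Facebook has not published its formula, and the metric and cutoff here are hypothetical), one simple way to quantify a “tight geographic area” is to measure how concentrated a publisher’s clicks are across regions:

```python
import math
from collections import Counter


def geo_concentration(click_regions):
    """Normalized click-location entropy: near 0 when clicks come from one
    area (a local publisher), near 1 when spread evenly (a national one)."""
    counts = Counter(click_regions)
    if len(counts) < 2:
        return 0.0
    total = sum(counts.values())
    entropy = -sum((c / total) * math.log(c / total) for c in counts.values())
    return entropy / math.log(len(counts))


# A publisher read almost entirely around one metro area scores as local.
clicks = ["denver"] * 95 + ["boulder"] * 5
is_local = geo_concentration(clicks) < 0.5  # hypothetical cutoff
```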

To start, this change is taking effect in the US, and we plan to expand to more countries this year. You can always choose which news sources, including local or national publications, you want to see at the top of your feed with our See First feature.

What This Means for Publishers
As we announced earlier this month, we expect the amount of news in News Feed to go down as we focus on meaningful social interactions with family and friends over passive consumption. We are prioritizing local news as a part of our emphasis on high-quality news, and with today’s update, stories from local news publishers may appear higher in News Feed for followers in publishers’ geographic areas. This change is one of the many signals that go into News Feed ranking. For more, see our Publisher Guidelines.

There are no constraints on which publishers are eligible, which means large local publishers will benefit, as well as publishers that focus on niche topics like local sports, arts and human-interest stories. That said, small news outlets may benefit from this change more than other outlets, because they tend to have a concentrated readership in one location.

This is just the beginning of our efforts to prioritize high-quality news. This update may not capture all small or niche-interest publishers at first, but we are working to improve precision and coverage over time. All of our work to reduce false news, misinformation, clickbait, sensationalism and inauthentic accounts still applies.

Our Commitment to Local News
We’ve worked closely with local publishers through the Facebook Journalism Project over the last year, visiting newsrooms around the world to provide training and support for journalists, as well as building products that work for their publications and readers. Local news publishers participated in the majority of our collaborative product tests in 2017, including support for subscriptions in Instant Articles; call-to-action units, which are prompts for readers to like a publisher’s page or sign up for an email newsletter; and a new breaking news format in News Feed.

In addition to prioritizing local news, we are also testing a dedicated section on Facebook that connects people to news and information in their community, called Today In. We are testing this in six US cities and plan to expand in the coming months.

These efforts to prioritize quality news in News Feed, including this local initiative, are a direct result of the ongoing collaboration with partners. Our goal is to show more news that connects people to their local communities, and we look forward to improving and expanding these efforts this year.

Giving You More Control of Your Privacy on Facebook

By Erin Egan, Chief Privacy Officer

As part of Data Privacy Day, we’re introducing a new education campaign to help you understand how data is used on Facebook and how you can manage your own data. We’re also announcing plans to make your core privacy settings easier to find, and sharing our privacy principles for the first time. These principles guide our work at Facebook.

Helping You Take Control of Your Data on Facebook
You have many ways to control your data on Facebook. This includes tools to make sure you share only what you want with the people you want to see it. But privacy controls are only powerful if you know how to find and use them. Starting today we’re introducing educational videos in News Feed that help you get information on important privacy topics like how to control what information Facebook uses to show you ads, how to review and delete old posts, and even what it means to delete your account.

We’re also inviting people to take our Privacy Checkup and sharing privacy tips in education campaigns off Facebook, including ads on other websites. We’ll refresh our education campaigns throughout the year to give you tips on different topics.

Making Privacy Settings Easier to Find
We know how important it is for you to have clear, simple tools for managing your privacy. This year, we’ll introduce a new privacy center that features core privacy settings in a single place. We’re designing this based on feedback from people, policymakers and privacy experts around the world.

Facebook’s Privacy Principles
Our efforts to build data protection into our products and give you more information and control reflect core privacy principles that have long guided our work. Today we’re sharing these principles publicly for the first time, and we’ve included them below.

We’re also developing resources that help other organizations build privacy into their services. For example, throughout 2018 we’re hosting workshops on data protection for small and medium businesses, beginning in Europe with a focus on the new General Data Protection Regulation. We hosted our first workshop in Brussels last week and published a guide for frequently asked questions. Around the world we’ll continue to host Design Jams that bring designers, developers, privacy experts and regulators together to create new ways of educating people on privacy and giving them control of their information.

We’ll keep improving our privacy tools and look forward to hearing what you think.


Facebook’s Privacy Principles

Facebook was built to bring people closer together. We help you connect with friends and family, discover local events and find groups to join. We recognize that people use Facebook to connect, but not everyone wants to share everything with everyone – including with us. It’s important that you have choices when it comes to how your data is used. These are the principles that guide how we approach privacy at Facebook.

We give you control of your privacy
You should be able to make the privacy choices that are right for you. We want to make sure you know where your privacy controls are and how to adjust them. For example, our audience selector tool lets you decide who you share with for every post. We develop controls based on feedback from around the world.

We help people understand how their data is used
While our Data Policy describes our practices in detail, we go beyond this to give you even more information. For example, we include education and tools in people’s day-to-day use of Facebook – like ad controls in the top right corner of every ad.

We design privacy into our products from the outset
We design privacy into Facebook products with guidance from experts in areas like data protection and privacy law, security, interface design, engineering, product management, and public policy. Our privacy team works to build these diverse perspectives into every stage of product development.

We work hard to keep your information secure
We work around the clock to help protect people’s accounts, and we build security into every Facebook product. Our security systems run millions of times per second to help catch threats automatically and remove them before they ever reach you. You can also use our security tools like two-factor authentication to help keep your account even more secure.

You own and can delete your information
You own the information you share on Facebook. This means you decide what you share and who you share it with on Facebook, and you can change your mind. That’s why we give you tools for deleting anything you’ve posted. We remove it from your timeline and from our servers. You can also delete your account whenever you want.

Improvement is constant
We’re constantly working to develop new controls and design them in ways that explain things to people clearly. We invest in research and work with experts beyond Facebook including designers, developers, privacy professionals and regulators.

We are accountable
In addition to comprehensive privacy reviews, we put products through rigorous data security testing. We also meet with regulators, legislators and privacy experts around the world to get input on our data practices and policies.

Guest Post: Is Social Media Good or Bad For Democracy?

By Toomas Hendrik Ilves, Distinguished Visiting Fellow, the Hoover Institution
This post is part of a series on social media and democracy.

For some two centuries, the electoral process has developed alongside technology – radio and television, the introduction of voting machines, mechanical and electronic – but never has the impact been so dramatically disruptive as in the past decade, with the arrival of hacking, doxing, “fake news,” social media and big data.

Liberal democracies and the political process in Germany, France, the UK, the US, Spain and elsewhere have been subjected to a variety of distinct digital “attack vectors” in the past two years. Some are linked to Russia, others to one or another political party or group. These vectors include a range of disparate tactics, all under the misleading and overused general rubric of “election hacking.” Some but not all use social media; some aspects of social media manipulation remain murky and poorly understood, mainly because until autumn 2017 social media companies were loath to reveal what they knew.

Hacking, or breaking into servers and computers, goes back at least to the early 1970s. As a tool of espionage, it was inevitable that political parties, parliaments and candidates would eventually be hacked too. Now, however, a more pernicious technique known as “doxing,” or making private information public, has become part of the political process. First used on a wide scale by WikiLeaks to publicize stolen US State Department cables, doxing has become a tool of political campaigns.

In the recent US and French elections, doxing was used by one side to embarrass the other. Russian hackers breached both Republican and Democratic servers but only released information on the Democrats. In France, no emails from the Front National, the far-right French party, were doxed. As a new twist, some of the doxed emails from Macron’s servers were clearly faked, planted there to cause even more damage.

These are new techniques, at least at this massive level of dissemination. Kompromat, the Russian term for compromising material, real or not, has been a staple of political action for centuries. Yet only with the advent of social media has kompromat found widespread distribution and, no less important, redistribution via shares.

Social acceptance of purloined correspondence is also changing. It is difficult to imagine that the media would have accepted or publicized physically stolen correspondence, had the Watergate break-in in 1972 been successful. As the 2016 US election showed, publishing purloined digital correspondence created no ethical dilemmas, even for the New York Times.

“Is social media therefore good or bad for democracy?” Too many factors are at play — and too little is known about their impact — to answer this question fairly. Certainly the effect on electoral democracy has been profound. Moreover, the effects may not be felt in democratic elections themselves but in how governments react to perceived threats, that is by imposing limits on free expression. It is imperative, however, that we explore the issue, with honesty and candor.

‘Fake News’ or Disinformation

Until the digital era, the primary problem with “fake news,” or as it was called then, “disinformation,” was its dissemination. Editors took care that published information was reliable, fearing both libel laws and the loss of their publication’s reputation. If something was patently false, ridiculous or unverifiable, the broader public never saw it.

The classic example of manufactured lies, the claim that the AIDS-causing HIV virus was developed by the CIA, took months to migrate to Europe from the story’s initial placement in a provincial Indian communist party paper. Even when the story eventually did reach the European press, it never gained traction, other than as an example of Soviet disinformation.

Today, it’s possible to create a fake news outlet, with a fake masthead in Gothic typeface, put it on Facebook, Vkontakte or Twitter and watch it take off. The public sees an article in something that looks like a news site. If they press the share button or retweet icon before detecting the fraud, it takes a fraction of a second before it’s off to friends and followers who may consider that share to be additional confirmation or approval.

In an ambitious 2016 study, BuzzFeed examined the consumption of fake news shared on Facebook in the three months before the US election and found that the top-performing fake news stories generated 8.7 million shares, reactions and comments. That compared to 7.3 million for the top stories produced by major news outlets and shared on Facebook.

We can add to this the Pew study from last year, which found that two-thirds of Americans rely on social media for at least some of their news, and a more recent Dartmouth study showing that 27.4% of voting-age Americans visited a pro-Trump or pro-Clinton fake news site in the final weeks of the 2016 US election. We cannot say for certain that such sites altered anyone’s vote, but we must admit that false news on social media is now a fundamental input to voters’ decision-making.

The problem with drawing conclusions from these numbers is that it is extremely difficult to judge the actual impact of this massive disinformation effort. Research is relatively recent, and concern over the issue is new. The studies have been inconclusive, although it is clear that false stories do get shared and retweeted on a large scale. Those who wish to downplay the impact on voters claim the BuzzFeed numbers and other studies do not prove there was an effect on the election; others are alarmed and are pushing for measures to limit “fake news” through legislation or regulation.

Electoral Democracy vs. Freedom of Expression

It is the last tendency, this call to legislate fake news, where the two pillars of liberal democracy – elections for the orderly transition of power and constitutionally guaranteed freedom of expression – increasingly come into conflict. This conflict is likely to become more serious in coming years. This past June, Germany passed a law (the Netzwerkdurchsetzungsgesetz, or Network Enforcement Act) mandating fines of up to 50 million euros (about $59 million) for platforms that fail to take down hate speech or fake news within 24 hours of its posting. Because of its own history with extremism, Germany has always been particularly strict on hate speech. Social media makes it more so.

The technical, jurisdictional and implementation problems with Germany’s approach (or any other democratic country’s similar approach) are legion. But there are even graver problems. Illiberal regimes typically cherry-pick and copy-paste sections of Western legislation to deflect criticism that their own rules are too heavy-handed. The Russian Duma, as is its wont, has already introduced a copycat of the German law, mandating removal of material deemed “illegal” within 24 hours.

Pressure to regulate fake news will increase. Some countries – the US, Estonia (consistently ranked No. 1 for internet freedom by Freedom House) and others – will probably resist. But it is not clear how long this will continue if governments see a threat to democracy or even to the centrist parties currently in office. Germany, after all, was ranked fourth in internet freedom in 2016, just behind the US and Canada. Now, after the September elections in which the extremist right-wing party AfD gained 13% of the vote, social media is seen as a source of political upheaval.

In the absence of more self-policing by social media platforms, pressure to regulate over the issue of “fake news” will not recede.

Technological Threats: Bots, Big Data and Targeted Dark Ads

“Fake news” is a concept easily grasped, and it has dominated politicians’ concerns. Yet a handful of newer “attack vectors” may prove a greater threat to democracy.

One of those vectors is “bots.” The Twittersphere especially has been deluged by bots — robot accounts tweeting and retweeting stories that are generally fake — often in the service of governments or extremist political groups trying to sway public opinion. NATO’s Center of Excellence for Strategic Communication, for example, recently reported that an astounding 84% of Russian-language Twitter messages about NATO’s presence in Eastern Europe were generated by bots. The assumption, of course, is that the more something is seen, the more likely it is to be believed.

Russian and “Alt-Right” bot accounts have set their sights on a variety of issues. The hashtag #Syriahoax appeared immediately after news of a Syrian chemical gas attack and at one point was being retweeted by a single source every five seconds. Automated bots and human-assisted accounts — from within both the US and Russia — attacked Republican Senator John McCain after he criticized President Trump for his response to the violent Charlottesville protests. Former FBI director James Comey and Senate Minority Leader Charles Schumer have also come under attack from Russian-based Twitter bots.

In my view, Twitter itself has not been particularly forthcoming in addressing these concerns. Again, as with news stories, the unanswered question remains their efficacy. While Twitter bots can attract a fair bit of attention in the Alt-Right press, we still cannot say how much they affect political discourse or the outcome of elections.

(Update on January 26, 2018: An earlier version of this piece inaccurately said Twitter no longer has an office in Germany. It also mischaracterized the company’s response to investigators seeking information.)

Big Data. An altogether different technological issue took hold during the 2016 US election: the use of “Big Data Analytics,” primarily by the company Cambridge Analytica and its affiliates. Research by Michal Kosinski, then a PhD student at Cambridge University, demonstrated that Facebook likes provide a highly useful source for a personality assessment of Facebook users called OCEAN, which stands for openness, conscientiousness, extroversion, agreeableness and neuroticism. Cambridge Analytica initially said it used the OCEAN test — considered the best of its kind — in its work with both the Trump campaign and the Leave campaign leading up to the Brexit referendum in the UK. Its stance has since shifted quite a bit, from boasting to lying low.

Big data analytics can provide a granular view of voter concerns and political leanings, which in turn provides a new way to target voters in political campaigns.

This is precisely what Cambridge Analytica originally claimed regarding both the Leave (or Brexit) campaign and the Trump campaign. Later, whether for legal, privacy or simply political reasons, Cambridge Analytica and its affiliates walked back the original claims. Currently it is not possible to tell which claims are true and which are not.

Dark Ads. Since the beginning of electoral politics, campaigns have relied on speeches, ads and commercials that were visible to the entire electorate. This new technology, with its finely granular approach, allowed campaigns for the first time to tailor ads to individual voters.

People by now are accustomed to seeing ads related to their previous internet searches; one could say this merely represents the extension of a new advertising technology to the political sphere…

Except… The targeted political ads of the US presidential election and the UK referendum were not public. Rather, they were “dark ads” (or, in Facebook language, “unpublished posts”), seen only by individual users and based on profiles gleaned from their internet use and other personal data. If highly granular voter profiling is an unfortunate but inevitable result of Big Data analytics, the lack of transparency in Facebook’s dark advertising represented a significant step away from the norms of the democratic process. Voters, journalists and other commentators didn’t know what message was actually being sent out as voters went to the polls. When ads are public, they are open to criticism, as they have been throughout history. Yet some 80% of the Leave campaign’s advertising dollars went to social media. In the case of the US election, even that figure is unknown, other than that the Trump campaign, by some accounts, spent roughly $70 million on Facebook.

Facebook now says it will make all ads on its platform public, not just political ads, and is preparing to make that change. Unfortunately, it is unclear what impact these ads had on the political process before this change was announced.

Facebook initially maintained that its policy for political ads was the same as for commercial advertising, and it refused to publish the ads, their frequency, whom they targeted, or the audience size. Under pressure, it has now rethought those views. That’s a positive step for the democratic process. Voters and the press reporting on candidates are entitled to know the whole picture.

Quo Vadis?

With the dramatic convergence of social media and election technology, debate about these issues is outpacing our knowledge of what is taking place. Hampered by a dearth of research on the political effects of “fake news,” bots and dark ads, as well as by social media companies’ reluctance to disclose real data, political debates have been ad hoc, emotional and ill-informed.

In many ways it’s a race: will governments and parliaments react on too little information with legislation that encroaches on fundamental freedoms? Or will they wait for enough facts before enacting what seems to be the inevitable regulation of social media, beginning in Europe? How long will governments wait as they see continued meddling in public discussions through social media? I suspect not for long, as we have already seen in the case of Germany.

The power of social media today mirrors the power of companies during the Industrial Revolution — railroads, energy and water companies that we know today as “utilities,” deemed so vital that they needed to be regulated. This may be the direction liberal democratic governments take with social media companies — deeming them too big, too powerful, potentially too threatening for politicians to tolerate. Not only center-left politicians in “statist” Europe but right-wing political figures such as Steve Bannon speak of regulating social media as utilities. This has already become a major issue for Facebook, Twitter and other media in the liberal world. Elsewhere, where there is no electoral democracy, there is no debate.

Toomas Hendrik Ilves was President of Estonia from 2006 to 2016 and is now a distinguished visiting fellow at Stanford University’s Hoover Institution.

Guest Post: Is Social Media Good or Bad for Democracy?

By Ariadne Vromen, Professor of Political Sociology at the University of Sydney
This post is part of a series on social media and democracy.

Social media use is ubiquitous within the everyday lives of citizens. In Australia, where I live and teach on political participation, Facebook and smartphone use are among the highest in the world. Their ease of use and constant accessibility are changing our social networks and reshaping our political world.

I don’t minimize the potential challenge of issues like “fake news” or the “filter bubble.” They are real, serious and as yet untamed. And yet a technology that has the capacity to expand and diversify political equality around the world is a net good. Most other forms of political engagement tend to favor those with the most wealth or access. Not social media. It gives voice to anyone with a phone. In a time when political power is synonymous with economic power, the type of collective action social media makes possible is giving more people a say in the conduct of their governments and the society they live in.

Despite compulsory voting and a strong commitment to democratic processes, politicians and political parties are not held in very high regard in Australia. Trust in political actors has been trending downward for some time. The barrier between ordinary people and elite political decision-making needs to be overcome if citizens are to view politics as relevant to their lives, and not just a space for adversarial and partisan conflict. Social media, through its informal, everyday use, offers the potential to improve the relationships between politics and citizens, and between citizens and citizens. Democracy is strengthened when online relationships are reciprocal and political elites are publicly pushed to be more responsive to everyday citizens and civil society organizations. On the other hand, if people speak up but no one actively listens, simply having a new communication and organizing tool will make no difference.

In Australia, these tendencies surfaced in 2017 over a national plebiscite to legalize same-sex marriage. Ordinary citizens used social media to express their own views, from changing their Facebook avatars to circulating petitions. Social media provided a unique space for forming and sharing opinions on this topic. Unlike many other advanced democracies, Australia has been very slow to recognize and legalize these unions. It has been a topic of heated public and legislative debate for over 10 years, and culminated in the conservative Liberal-National Party government’s decision to hold a non-binding, non-compulsory, mail-back survey of Australian citizens in November 2017, preceded by an eight-week campaign and voting period. In the end 80% of eligible Australians turned out to vote in the plebiscite, with 62% voting Yes. Same-sex marriage was legislated in the Australian national parliament on December 7.

Yet the debate between the “Yes” and “No” camps was polarizing. Many feared that LGBTQI Australians would suffer long-term harm from the protracted campaign, in which they were subjected to online abuse and had to repeatedly justify their access to basic rights. Others were concerned about the spread of misinformation online, from arguments that same-sex marriage would diminish religious freedom in Australia to claims that it would change school sex education programs. The outcomes of this debate did not emerge from utopian views of “listening to the other side” in a march toward consensus. Instead, the social media-led campaigning became important for building political momentum among a population generally sympathetic to same-sex marriage. The reality is, consensus is not always possible. What is more important is for people to have a safe space to talk with friends and family about their principles.

Two features of these campaigns make them a revealing example of how integral social media has become to democratic politics. First, the reliance on “personal action frames” in online campaign materials. These are videos used by individuals to tell their own story as a means of exposing larger issues and building connections with those who are like-minded. This use of storytelling in collective action has traditionally been associated with progressive causes, and used effectively by prominent LGBTQI groups such as Australians for Marriage Equality (AME). Yet well-funded conservative groups such as the Australian Christian Lobby (ACL) also created videos and memes. Both campaigns spent large amounts on targeted social media advertising and less on traditional mail-out, television and newspaper ads.

Second, the social media-driven campaigns were shaped by formal politics and heightened partisan conflicts. Same-sex marriage divided the Liberal/National conservative government between its social conservatives and liberals, with socially conservative politicians openly joining religious organizations like the ACL to lead the “No” campaign. The opposition party, the Australian Labor Party, publicly united behind the “Yes” campaign, while some politicians worked with other smaller progressive parties to build momentum. While the politics of the plebiscite was polarized and conflict-riven, much of the new coalition building reinvigorated linkages between politicians and civil society. Pro-marriage equality politicians had to work harder to get their base out for the plebiscite in a way that is very unusual in the Australian compulsory voting context.

This campaign also occurred within a growing ambivalence toward politics, especially among the young. Young women were often the public face of pro-marriage equality campaigning, and 74% of young people voted in the plebiscite. Their turnout was a crucial focus of the campaigns, particularly since this generation is the one most on social media. I have conducted research with young people under 30 and found that a majority of them hear about and follow links to news stories about politics via Facebook. Unlike older generations, most young people don’t go straight to traditional media for their news. Facebook is their first port of call in finding out what is happening in the political world. A majority also believe that their social networks of friends and family are politically diverse. This is interesting, as it turns personal access to news into a social exchange among trusted family and friends. While this has the potential to greatly narrow young people’s information sources into just an “echo chamber,” that seems less likely if their online communities are as ideologically diverse as they said they were in our research. Other Australian research has also questioned the dominant idea of social media as an algorithmically driven echo chamber of the like-minded. Recent changes to Facebook’s News Feed may well reinforce this pattern of news and information flowing through trusted networks of friends and family.

Overall, however, we found that young people have considerable ambivalence toward injecting politics into their Facebook accounts. Many expressed a general skepticism, and had feelings of exclusion from formal electoral politics. Our in-depth qualitative analysis showed that young people equate politics with conflict, and something to be avoided on social media. Who needs disagreements with friends and family? Others worried about saying something that was “wrong;” some worried about surveillance and censure by current and future workplaces; and some just wanted to keep social media a place for purely social interactions. Politics, they thought, should be separate.

Many young people are concerned about the incivility of democratic politics and don’t see it as relevant to their lives and the issues that matter to them. The current rise in divisive politics that enables racism, sexism and other forms of exclusion and discrimination only further alienates ordinary citizens. Thus, the barriers to political engagement have more to do with the structure, openness and actions of formal political institutions than the advent of social media. If we want young people and all citizens to feel heard, then the adversarial nature of formal politics itself needs to change. In an era of ongoing distrust of parties and politicians, and despite vibrant online campaigning, social media alone will not be the great democratic panacea. Nor is social media the root cause of political incivility.

Social media platforms weren’t set up to be news or political organizations, but to a large extent that is what they have become. Increasingly, questions arise about where the responsibility to oversee and regulate platforms ought to sit. With these concerns in mind, I recently completed a new project, Digital Rights and Governance in Australia. We asked Australians what they thought about their online privacy, the use of their social media data by their workplaces, governments and third-party advertisers, and threats to their free speech online. Australians recognize that they often trade personal data for the use of platforms and free apps. Yet they also want more protection of their individual privacy, and they expect civility and safety online. A majority are concerned about their privacy being violated by corporations, and nearly 80% want to know what social media companies do with their personal data. They think that some use of data analytics and targeting by advertisers is beyond the pale, especially during elections. Australians particularly wanted more regulation of online discussion forums and for it to be easier to have personally harmful content removed. In this situation social media platforms need to take on greater involvement in content moderation and ensure they provide easy, responsive complaints reporting.

But neither governments nor social media companies alone can improve the digital rights of citizens. Both have self-serving interests in collecting data, whether to monetize it or to use it for partisan or security reasons. The boundaries here must be set by strong civil society organizations that represent the interests of ordinary citizens.

My examples of the same-sex marriage plebiscite, young people and politics, and the digital rights agenda demonstrate that the dividing line between social media as “good” or “bad” for democracy is porous and shifting. Social media can easily become a democratic “bad” when there is a breakdown in civility, political polarization increases and targeted misinformation spreads.

Inherently, however, I believe social media is a net “good” for civic engagement. Whether that remains so rests in part with Facebook, Twitter and all the companies that operate these platforms, and their willingness to play a more active and transparent role in working with civil society organizations to protect the networks they created.

Ariadne Vromen is Professor of Political Sociology at the University of Sydney, Australia. She has undertaken extensive research on young people’s participation on social media, including her recent collaborative project, The Civic Network, on young people’s use of social media for politics. Her book Digital Citizenship and Political Engagement was published early in 2017 by Palgrave Macmillan; and the open access report, Digital Rights and Governance in Australia, was published in November 2017.

News Feed FYI: Replacing Disputed Flags with Related Articles

By Tessa Lyons, Product Manager

Facebook is about connecting you to the people that matter most. And discussing the news can be one way to start a meaningful conversation with friends or family. It’s why helping to ensure that you get accurate information on Facebook is so important to us.

Today, we’re announcing two changes which we believe will help in our fight against false news. First, we will no longer use Disputed Flags to identify false news. Instead we’ll use Related Articles to help give people more context about the story. Here’s why.

Academic research on correcting misinformation has shown that putting a strong image, like a red flag, next to an article may actually entrench deeply held beliefs – the opposite of the effect we intended. Related Articles, by contrast, are simply designed to give more context, which our research has shown is a more effective way to help people get to the facts. Indeed, we’ve found that when we show Related Articles next to a false news story, it leads to fewer shares than when the Disputed Flag is shown.

Second, we are starting a new initiative to better understand how people decide whether information is accurate or not based on the news sources they depend upon. This will not directly impact News Feed in the near term. However, it may help us better measure our success in improving the quality of information on Facebook over time.

False news undermines the unique value that Facebook offers: the ability for you to connect with family and friends in meaningful ways. It’s why we’re investing in better technology and more people to help prevent the spread of misinformation. Overall, we’re making progress. Demoting false news (as identified by fact-checkers) is one of our best weapons because demoted articles typically lose 80 percent of their traffic. This destroys the economic incentives spammers and troll farms have to generate these articles in the first place.

But there’s much more to do. By showing Related Articles rather than Disputed Flags we can help give people better context. And understanding how people decide what’s false and what’s not will be crucial to our success over time. Please keep giving us your feedback because we’ll be redoubling our efforts in 2018.

New Tools to Prevent Harassment

By Antigone Davis, Global Head of Safety

Today we are announcing new tools to prevent harassment on Facebook and in Messenger – part of our ongoing efforts to build a safe community.

Based on feedback from people who use Facebook, as well as from organizations representing groups that disproportionately experience harassment, such as women and journalists, we are introducing new features that:

  • Proactively recognize and help prevent unwanted contact like friend requests and messages when someone you blocked sets up a new account or tries to contact you from another account they control
  • Provide the option to ignore a Messenger conversation and automatically move it out of your inbox, without having to block the sender

We already prohibit bullying and harassment on Facebook, and people can let us know when they see something concerning or have a bad experience. We review reports and take action on abuse, like removing content, disabling accounts, and limiting certain features like commenting for people who have violated our Community Standards. People can also control what they share, who they share it with, and who can communicate with them. These new features for personal profiles give people additional ways to manage their experience on Facebook.

Preventing unwanted contact

We’ve heard stories from people who have blocked someone only to encounter the same harasser using a different account. In order to help prevent those bad encounters, we are building on existing features that prevent fake and inauthentic accounts on Facebook.

These automated features help us identify fake accounts more quickly and block millions of them at registration every day. However, sometimes a new account created by someone who was previously blocked might not get caught by these features.

We are now using various signals (like an IP address) to help us proactively recognize this type of account and prevent its owner from sending a message or friend request to the person who blocked the original account. The person who blocked the original account is in control, and must initiate contact with the new account in order for them to interact normally.
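To make the mechanics concrete, here is a minimal sketch of this kind of signal-based matching. It is purely illustrative: the class, the specific signals and the matching rule are assumptions for explanation, not a description of the production system.

    # Illustrative sketch only; names, signals and the matching rule are
    # assumptions, not the production implementation.
    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Account:
        account_id: str
        registration_ip: str   # one example signal, as mentioned above
        device_ids: frozenset  # another plausible signal

    def shares_signals(a: Account, b: Account) -> bool:
        # Treat two accounts as linked if they share an IP address or a device.
        return (a.registration_ip == b.registration_ip
                or bool(a.device_ids & b.device_ids))

    def may_initiate_contact(sender: Account, blocked_by_recipient: list) -> bool:
        # Suppress friend requests and messages from any account that looks
        # like a new account of someone the recipient already blocked.
        # The recipient can still choose to message the new account first.
        return not any(shares_signals(sender, b) for b in blocked_by_recipient)

A real system would weigh many more signals and guard against false positives, for example households or offices that share a single IP address.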

Ignoring messages

If someone is being harassed, blocking the abuser sometimes prompts additional harassment, particularly offline. We’ve also heard from groups that work with survivors of domestic violence that being able to see messages is often a valuable tool to assess if there is risk of additional abuse.

Now, you can tap on a message to ignore the conversation. This disables notifications and moves the conversation from your inbox to your Filtered Messages folder. You can read messages in the conversation without the sender seeing whether they’ve been read. This feature is now available for one-on-one conversations and will soon be available broadly for group messages, too.
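The state change behind “ignore” is easy to sketch; the field names below are invented for illustration:

    # Toy sketch of the "ignore" state described above; field names invented.
    def ignore_conversation(conversation: dict) -> dict:
        conversation["folder"] = "filtered_messages"    # moved out of the inbox
        conversation["notifications_enabled"] = False   # no more alerts
        conversation["send_read_receipts"] = False      # reads invisible to the sender
        # Unlike blocking, the sender can still write, so a person at risk
        # can quietly monitor the conversation for signs of escalation.
        return conversation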

Working with experts

Facebook works with experts in a variety of fields to provide safety resources to people. For example, we’ve developed new resources for survivors of domestic violence in partnership with the National Network to End Domestic Violence. This is in addition to our work with more than 150 safety experts over the last year in India, Ireland, Kenya, the Netherlands, Spain, Turkey, Sweden and the US to get feedback on ways we can improve.

We have also convened roundtables with the Facebook Journalism Project to learn more about the unique experiences of the journalist community on Facebook. This culminated in the features we’re making available today, as well as resources to help journalists protect themselves on Facebook.

Managing Your Identity on Facebook with Face Recognition Technology

By Joaquin Quiñonero Candela, Director, Applied Machine Learning

Today we’re announcing new, optional tools to help people better manage their identity on Facebook using face recognition. Powered by the same technology we’ve used to suggest friends you may want to tag in photos or videos, these new features help you find photos that you’re not tagged in and help you detect when others might be attempting to use your image as their profile picture. We’re also introducing a way for people who are visually impaired to know more about who is in the photos they encounter on Facebook.

People gave us feedback that they would find it easier to manage face recognition through a simple setting, so we’re pairing these tools with a single “on/off” control. If your tag suggestions setting is currently set to “none,” then your default face recognition setting will be set to “off” and will remain that way until you decide to change it.

Know When You Appear in Photos on Facebook

Now, if you’re in a photo and are part of the audience for that post, we’ll notify you, even if you haven’t been tagged. You’re in control of your image on Facebook and can make choices such as whether to tag yourself, leave yourself untagged, or reach out to the person who posted the photo if you have concerns about it. We always respect the privacy setting people select when posting a photo on Facebook (whether that’s friends, public or a custom audience), so you won’t receive a notification if you’re not in the audience.
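A small sketch of that audience check, with invented field names:

    # Hypothetical sketch of the audience-gated notification described above;
    # the field names and the notify callback are assumptions.
    def notify_recognized_people(photo: dict, recognized: set, notify) -> None:
        for person in recognized:
            # Respect the poster's privacy setting: only notify people who can
            # already see the photo, and skip anyone who is already tagged.
            if person in photo["audience"] and person not in photo["tags"]:
                notify(person, photo)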

Profile Photo Safety

We want people to feel confident when they post pictures of themselves on Facebook so we’ll soon begin using face recognition technology to let people know when someone else uploads a photo of them as their profile picture. We’re doing this to prevent people from impersonating others on Facebook.

New Tools for People with Visual Impairments

We’re always working to make it easier for all people, regardless of ability, to access Facebook, make connections and have more opportunities. Two years ago, we launched an automatic alt-text tool, which describes photos to people with vision loss. Now, with face recognition, people who use screen readers will know who appears in photos in their News Feed even if people aren’t tagged.

How it Works and the Choices You Have

Since 2010, face recognition technology has helped bring people closer together on Facebook. Our technology analyzes the pixels in photos you’re already tagged in and generates a string of numbers we call a template. When photos and videos are uploaded to our systems, we compare those images to the template.

You control whether Facebook can recognize you in photos and videos. Soon, you will begin to see a simple on/off switch instead of settings for individual features that use face recognition technology. We designed this as an on/off switch because people gave us feedback that they prefer a simpler control than having to decide for every single feature using face recognition technology. To learn more about all of these features, visit the Help Center or your account settings.
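As a rough illustration of the template idea, consider the sketch below. The embedding step, the averaging, the cosine-similarity comparison and the threshold are all assumptions made for explanation; the actual matching algorithm is not described in this post.

    # Minimal sketch of template-based matching, assuming faces have already
    # been reduced to numeric vectors by some embedding model. The averaging,
    # similarity metric and threshold are illustrative guesses.
    import numpy as np

    def build_template(tagged_face_vectors: list) -> np.ndarray:
        # Summarize the faces a person is tagged in as a single vector.
        return np.mean(np.stack(tagged_face_vectors), axis=0)

    def matches_template(template: np.ndarray, new_face: np.ndarray,
                         threshold: float = 0.6) -> bool:
        # Compare a face from a newly uploaded photo or video against the
        # stored template using cosine similarity.
        sim = float(np.dot(template, new_face) /
                    (np.linalg.norm(template) * np.linalg.norm(new_face)))
        return sim >= threshold

    def face_recognition_enabled(settings: dict) -> bool:
        # The single on/off switch: when off, no comparison happens at all.
        return settings.get("face_recognition", "off") == "on"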

We are introducing these new features in most places, except in Canada and the EU where we don’t currently offer face recognition technology.

Hard Questions: Should I Be Afraid of Face Recognition Technology?

By Rob Sherman, Deputy Chief Privacy Officer

The words “face recognition” can make some people feel uneasy, conjuring dystopian scenes from science fiction. Can someone use it to identify strangers on the street? Are institutions gathering mass databases of images that can be used to invade someone’s privacy or rights?

As government and non-government agencies, companies and others use face recognition technology in new ways, people want to understand how their privacy is being protected and what choices they have over how this technology is used.

Like many tools, face recognition can be used for good purposes — like helping people securely unlock their mobile devices, log into their bank accounts and make digital payments. It can help people organize their photos and share them with friends. It’s even being used to find missing and kidnapped children and to help officials confirm whether travelers have authentic passports.

But it can also be used in concerning ways. Some have raised concerns about how law enforcement uses the technology. Others have called attention to the potential for racial bias, arguing that facial recognition systems are more likely either to misidentify or fail to identify African Americans than people of other races. And while there have been proposals to regulate face recognition, there’s no consensus on whether or how to do so, and some approaches have been criticized for failing to focus on the most harmful potential uses.

This tension isn’t new. Society often welcomes the benefits of an innovation while struggling to harness its potential. “Beware the Kodak,” one newspaper intoned in 1888 as inexpensive equipment came onto the market, making photography available to the masses. The paper called it a “new terror for the picnic.” Confronting amateur photography for the first time, society could have restricted this technology – and fundamentally changed the way history was documented for more than a century. Instead, regulators took action on uses that raised concerns — for example, by prohibiting stalking or letting people sue for invasion of privacy — rather than requiring licenses to use “camera technology” or written consent forms before a person could appear in a photo. As a result, people became familiar with these early cameras, social norms evolved, and the world decided that the benefits of personal photography far outweighed the risks.

Face Recognition and Facebook

On Facebook, face recognition helps people tag photos with the names of their friends. When you have face recognition enabled, our technology analyzes the pixels in photos you’re already tagged in and generates a string of numbers we call a template. When photos and videos are uploaded to our systems, we compare those images to the template. Here’s a video explaining how it works:

When we first introduced this feature in 2010, there was no industry standard for how people should be able to control face recognition. We decided to notify people on Facebook and provide a way to disable it in their account settings at any time.

We recently announced new features that use face recognition technology. People can now find photos of themselves even when they aren’t tagged in them, making it possible to manage their privacy in new ways. They can also learn when someone is using their image as a profile photo — which can help stop impersonation. In addition, people with vision impairments can now hear aloud who’s in the photos they come across on Facebook. Just as in 2010, we had to evaluate how we’d inform people and give them choice over these new uses of the technology.

Our Responsibility

When it comes to face recognition, control matters. We listen carefully to feedback from people who use Facebook, as well as from experts in the field. We believe we have a responsibility to build these features in ways that deliver on the technology’s promise, while avoiding harmful ways that some might use it.

Our team has been working for more than a year to collect and respond to feedback on how people want to see us use this technology — and how we can do it most responsibly. People asked us to explain how face recognition works more clearly, and to provide more prominent information about how we might use it on Facebook. To address this feedback, we’re informing people about updates to face recognition in News Feed – the doorstep of Facebook.

We also decided to update Facebook’s settings. Concerns about updated settings are as old as Facebook, so we didn’t take the decision lightly. But we learned in our research that people want a way to completely turn off face recognition technology rather than on a feature-by-feature basis. We knew that as we introduced more features using this technology, most people would find it easier to manage one master setting rather than navigate a long list of products deciding what they want and what they don’t. Our new setting is an on/off switch. Some may criticize this as an “all or nothing” approach, but we believe this will prevent people from having to make additional decisions among potentially confusing options.

Finally, we aren’t introducing, and have no plans to introduce, features that tell strangers who you are. This was a common concern we heard from people when we researched new features that rely on face recognition technology.

Moving Forward

As people use features on Facebook that use face recognition technology, we’ll learn more about what our community thinks of them — good, bad or in between. And we’ll learn what they think of our updated controls. We’ll build on these lessons and keep people informed about the work we’re doing to innovate and responsibly use this technology on Facebook.

Of course, it’s too early to know if face recognition will follow the path of the personal camera. But we look forward to the public’s feedback and to working with other companies and organizations as we continue to listen and learn.

Read more about our blog series Hard Questions. We want your input on what other topics we should address — and what we could be doing better. Please send suggestions to hardquestions@fb.com.

Reinforcing Our Commitment to Transparency

By Chris Sonderby, Deputy General Counsel

Today we are releasing our Transparency Report, previously called the Government Requests Report, for the first half of 2017. For the first time, we are expanding the report beyond government requests to provide data regarding reports from rights holders related to intellectual property (IP) — covering copyright, trademark, and counterfeit. The report also includes the same categories of information we’ve disclosed in the past, with updates on government requests for account data, content restrictions, and internet disruptions.

We believe that sharing information about IP reports we receive from rights holders is an important step toward being more open and clear about how we protect the people and businesses that use our services. Our Transparency Report describes these policies and procedures in more detail, along with the steps we’ve taken to safeguard the people who use Facebook and keep them informed about IP. It also includes data covering the volume and nature of copyright, trademark, and counterfeit reports we’ve received and the amount of content affected by those reports. For example, in the first half of 2017, we received 224,464 copyright reports about content on Facebook, 41,854 trademark reports, and 14,279 counterfeit reports.

In addition to our new section on intellectual property, we are also providing our usual twice-a-year update on government requests for account data, content restrictions based on local law, and information about internet disruptions in the first half of this year.

Requests for account data increased by 21% globally compared to the second half of 2016, from 64,279 to 78,890. Fifty-seven percent of the data requests we received from law enforcement in the U.S. contained a non-disclosure order that prohibited us from notifying the user, up from 50% in our last report. Additionally, as a result of transparency reforms introduced in 2016 by the USA Freedom Act, the U.S. government notified us that it was lifting the non-disclosure order on five National Security Letters (NSLs) we previously received between 2012 and 2015. Copies of the NSLs, as well as the government’s authorization letters, are available for download below.

We continue to carefully scrutinize each request we receive for account data — whether from an authority in the U.S., Europe, or elsewhere — to make sure it is legally sufficient. If a request appears to be deficient or overly broad, we push back, and will fight in court, if necessary. We’ll also keep working with partners in industry and civil society to encourage governments around the world to reform surveillance in a way that protects their citizens’ safety and security while respecting their rights and freedoms.

Overall, the number of content restrictions for violating local law increased by 304% globally, compared to the second half of 2016, from 6,944 to 28,036. This increase was primarily driven by a request from Mexican law enforcement to remove instances of a video depicting a school shooting in Monterrey in January. We restricted access in Mexico to 20,506 instances of the video in the first half of 2017.

Meanwhile, there were 52 disruptions of Facebook services in nine countries in the first half of 2017, compared to 43 disruptions in 20 countries in the second half of 2016. We continue to be deeply concerned by internet disruptions, which can create barriers for businesses and prevent people from sharing and communicating with their family and friends.

Publishing this report reinforces our important commitment to transparency as we build community and bring the world closer together.

Please see the full report for more information.

News Feed FYI: Fighting Engagement Bait on Facebook

By Henry Silverman, Operations Integrity Specialist and Lin Huang, Engineer

People have told us that they dislike spammy posts on Facebook that goad them into interacting with likes, shares, comments, and other actions. For example, “LIKE this if you’re an Aries!” This tactic, known as “engagement bait,” seeks to take advantage of our News Feed algorithm by boosting engagement in order to get greater reach. So, starting this week, we will begin demoting individual posts from people and Pages that use engagement bait.

To help us foster more authentic engagement, teams at Facebook have reviewed and categorized hundreds of thousands of posts to inform a machine learning model that can detect different types of engagement bait. Posts that use this tactic will be shown less in News Feed.

Additionally, over the coming weeks, we will begin implementing stricter demotions for Pages that systematically and repeatedly use engagement bait to artificially gain reach in News Feed. We will roll out this Page-level demotion over the course of several weeks to give publishers time to adapt and avoid inadvertently using engagement bait in their posts. Moving forward, we will continue to find ways to improve and scale our efforts to reduce engagement bait.
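A toy sketch of these two tiers follows. The classifier interface, the bait categories and the demotion multipliers are invented for illustration; the real model and values are not disclosed here.

    # Toy sketch of post-level and Page-level demotion; the classifier,
    # categories and multipliers are invented for illustration.
    BAIT_LABELS = {"vote_bait", "react_bait", "share_bait", "tag_bait", "comment_bait"}

    def is_engagement_bait(post_text: str, classifier) -> bool:
        # 'classifier' stands in for a model trained on the hundreds of
        # thousands of posts reviewers categorized by bait type.
        return classifier.predict(post_text) in BAIT_LABELS

    def demotion_multiplier(post: dict, page_recent_bait_posts: int, classifier) -> float:
        multiplier = 1.0
        if is_engagement_bait(post["text"], classifier):
            multiplier *= 0.5   # the individual post is shown less in News Feed
        if page_recent_bait_posts >= 10:
            multiplier *= 0.3   # stricter demotion for repeat-offender Pages
        return multiplier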

Posts that ask people for help, advice, or recommendations, such as circulating a missing child report, raising money for a cause, or asking for travel tips, will not be adversely impacted by this update.

Instead, we will demote posts that go against one of our key News Feed values — authenticity. Similar to our other recent efforts to demote clickbait headlines and links to low-quality web page experiences, we want to reduce the spread of content that is spammy, sensational, or misleading in order to promote more meaningful and authentic conversations on Facebook.

How will this impact Pages?
Publishers and other businesses that use engagement bait tactics in their posts should expect their reach on these posts to decrease. Meanwhile, Pages that repeatedly share engagement bait posts will see more significant drops in reach. Page Admins should continue to focus on posting relevant and meaningful stories that do not use engagement bait tactics. Learn more about engagement bait and how to avoid using it here.

Hard Questions: Is Spending Time on Social Media Bad for Us?

By David Ginsberg, Director of Research, and Moira Burke, Research Scientist at Facebook

With people spending more time on social media, many rightly wonder whether that time is good for us. Do people connect in meaningful ways online? Or are they simply consuming trivial updates and polarizing memes at the expense of time with loved ones?

These are critical questions for Silicon Valley — and for both of us. Moira is a social psychologist who has studied the impact of the internet on people’s lives for more than a decade, and I lead the research team for the Facebook app. As parents, each of us worries about our kids’ screen time and what “connection” will mean in 15 years. We also worry about spending too much time on our phones when we should be paying attention to our families. One of the ways we combat our inner struggles is with research — reviewing what others have found, conducting our own, and asking questions when we need to learn more.

A lot of smart people are looking at different aspects of this important issue. Psychologist Sherry Turkle asserts that mobile phones redefine modern relationships, making us “alone together.” In her generational analyses of teens, psychologist Jean Twenge notes an increase in teen depression corresponding with technology use. Both offer compelling research.

But it’s not the whole story. Sociologist Claude Fischer argues that claims that technology drives us apart are largely supported by anecdotes and ignore the benefits. Sociologist Keith Hampton’s study of public spaces suggests that people spend more time in public now — and that cell phones in public are more often used by people passing time on their own, rather than ignoring friends in person.

We want Facebook to be a place for meaningful interactions with your friends and family — enhancing your relationships offline, not detracting from them. After all, that’s what Facebook has always been about. This is important because we know that a person’s health and happiness rely heavily on the strength of their relationships.

In this post, we want to give you some insights into how the research team at Facebook works with our product teams to incorporate well-being principles, and review some of the top scientific research on well-being and social media that informs our work. Of course, this isn’t just a Facebook issue — it’s an internet issue — so we collaborate with leading experts and publish in the top peer-reviewed journals. We work with scientists like Robert Kraut at Carnegie Mellon; Sonja Lyubomirsky at UC Riverside; and Dacher Keltner, Emiliana Simon-Thomas, and Matt Killingsworth from the Greater Good Science Center at UC Berkeley, and we have partnered closely with mental health clinicians and organizations like Save.org and the National Suicide Prevention Lifeline.

What Do Academics Say? Is Social Media Good or Bad for Well-Being?

According to the research, it really comes down to how you use the technology. For example, on social media, you can passively scroll through posts, much like watching TV, or actively interact with friends — messaging and commenting on each other’s posts. Just like in person, interacting with people you care about can be beneficial, while simply watching others from the sidelines may make you feel worse.

The bad: In general, when people spend a lot of time passively consuming information — reading but not interacting with people — they report feeling worse afterward. In one experiment, University of Michigan students randomly assigned to read Facebook for 10 minutes were in a worse mood at the end of the day than students assigned to post or talk to friends on Facebook. A study from UC San Diego and Yale found that people who clicked on about four times as many links as the average person, or who liked twice as many posts, reported worse mental health than average in a survey. Though the causes aren’t clear, researchers hypothesize that reading about others online might lead to negative social comparison — and perhaps even more so than offline, since people’s posts are often more curated and flattering. Another theory is that the internet takes people away from social engagement in person.

The good: On the other hand, actively interacting with people — especially sharing messages, posts and comments with close friends and reminiscing about past interactions — is linked to improvements in well-being. This ability to connect with relatives, classmates, and colleagues is what drew many of us to Facebook in the first place, and it’s no surprise that staying in touch with these friends and loved ones brings us joy and strengthens our sense of community.

A study we conducted with Robert Kraut at Carnegie Mellon University found that people who sent or received more messages, comments and Timeline posts reported improvements in social support, depression and loneliness. The positive effects were even stronger when people talked with their close friends online. Simply broadcasting status updates wasn’t enough; people had to interact one-on-one with others in their network. Other peer-reviewed longitudinal research and experiments have found similar positive benefits between well-being and active engagement on Facebook.

In an experiment at Cornell, stressed college students randomly assigned to scroll through their own Facebook profiles for five minutes experienced boosts in self-affirmation compared to students who looked at a stranger’s Facebook profile. The researchers believe self-affirmation comes from reminiscing on past meaningful interactions — seeing photos they had been tagged in and comments their friends had left — as well as reflecting on one’s own past posts, where a person chooses how to present themselves to the world.

In a follow-up study, the Cornell researchers put other students under stress by giving them negative feedback on a test and then gave them a choice of websites to visit afterward, including Facebook, YouTube, online music and online video games. They found that stressed students were twice as likely to choose Facebook to make themselves feel better as compared with students who hadn’t been put under stress.

In sum, our research and the broader academic literature suggest that when it comes to your well-being, it’s how you use social media that matters.

So what are we doing about it?

We’re working to make Facebook more about social interaction and less about spending time. As our CEO Mark Zuckerberg recently said, “We want the time people spend on Facebook to encourage meaningful social interactions.” Facebook has always been about bringing people together — from the early days when we started reminding people about their friends’ birthdays, to showing people their memories with friends using the feature we call “On This Day.” We’re also a place for people to come together in times of need, from fundraisers for disaster relief to groups where people can find an organ donor. We’re always working to expand these communities and find new ways to have a positive impact on people’s lives.

We employ social psychologists, social scientists and sociologists, and we collaborate with top scholars to better understand well-being and work to make Facebook a place that contributes in a positive way. Here are a few things we’ve worked on recently to help support people’s well-being.

News Feed quality: We’ve made several changes to News Feed to provide more opportunities for meaningful interactions and reduce passive consumption of low-quality content — even if it decreases some of our engagement metrics in the short term. We demote things like clickbait headlines and false news, even though people often click on those links at a high rate. We optimize ranking so posts from the friends you care about most are more likely to appear at the top of your feed because that’s what people tell us in surveys that they want to see. Similarly, our ranking promotes posts that are personally informative. We also recently redesigned the comments feature to foster better conversations.
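For intuition, a toy scoring function along these lines might look like the sketch below; every signal name and weight here is invented, not the real ranking system.

    # Toy sketch of the ranking adjustments described above; signals and
    # weights are invented for illustration.
    def feed_score(post: dict, viewer: dict) -> float:
        score = post["base_relevance"]
        if post["author_id"] in viewer["closest_friend_ids"]:
            score *= 1.5    # friends you care about most rank higher
        if post.get("is_clickbait") or post.get("is_false_news"):
            score *= 0.2    # demoted even though such links attract clicks
        if post.get("personally_informative"):
            score *= 1.2
        return score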

Snooze: People often tell us they want more say over what they see in News Feed. Today, we launched Snooze, which gives people the option to hide a person, Page or group for 30 days, without having to permanently unfollow or unfriend them. This will give people more control over their feed and hopefully make their experience more positive.

Take a Break: Millions of people break up on Facebook each week, changing their relationship status from “in a relationship” to “single.” Research on people’s experiences after breakups suggests that offline and online contact, including seeing an ex-partner’s activities, can make emotional recovery more difficult. To help make this experience easier, we built a tool called Take a Break, which gives people more centralized control over when they see their ex on Facebook, what their ex can see, and who can see their past posts.

Suicide prevention tools: Research shows that social support can help prevent suicide. Facebook is in a unique position to connect people in distress with resources that can help. We work with people and organizations around the world to develop support options for people posting about suicide on Facebook, including reaching out to a friend, contacting help lines and reading tips about things they can do in that moment. We recently released suicide prevention support on Facebook Live and introduced artificial intelligence to detect suicidal posts even before they are reported. We also connect people more broadly with mental health resources, including support groups on Facebook.

What About Related Areas Like Digital Distraction and the Impact of Technology on Kids?

We know that people are concerned about how technology affects our attention spans and relationships, as well as how it affects children in the long run. We agree these are critically important questions, and we all have a lot more to learn.

That’s why we recently pledged $1 million toward research to better understand the relationship between media technologies, youth development and well-being. We’re teaming up with experts in the field to look at the impact of mobile technology and social media on kids and teens, as well as how to better support them as they transition through different stages of life.

We’re also making investments to better understand digital distraction and the factors that can pull people away from important face-to-face interactions. Is multitasking hurting our personal relationships? How about our ability to focus? Next year we’ll host a summit with academics and other industry leaders to tackle these issues together.

We don’t have all the answers, but given the prominent role social media now plays in many people’s lives, we want to help elevate the conversation. In the years ahead we’ll be doing more to dig into these questions, share our findings and improve our products. At the end of the day, we’re committed to bringing people together and supporting well-being through meaningful interactions on Facebook.

News Feed FYI: Introducing Snooze to Give You More Control Of Your News Feed

By Shruthi Muraleedharan, Product Manager

One of our core News Feed values is giving people more control. Over the next week, we’re launching Snooze, which will give you the option to temporarily unfollow a person, Page or group for 30 days. By selecting Snooze in the top-right drop-down menu of a post, you won’t see content from those people, Pages or groups in your News Feed for that time period.

Seeing too many photos of your uncle’s new cat? Is your friend tempting you with endless photos of ramen from her Japan trip? It turns out you’re not alone. We’ve heard from people that they want more options to determine what they see in News Feed and when they see it. With Snooze, you don’t have to unfollow or unfriend someone permanently; you can simply stop seeing their posts for a short period of time. The people, Pages and groups you snooze will not be notified. You will be notified before the Snooze period ends, and you can reverse the setting at any time.
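For readers curious how a control like this could be modeled, here is a minimal sketch under stated assumptions: the field names, the one-day reminder lead time, and the filtering helper are all hypothetical illustrations, not Facebook’s actual implementation.

```python
from datetime import datetime, timedelta, timezone

SNOOZE_DURATION = timedelta(days=30)
NOTIFY_BEFORE_END = timedelta(days=1)  # assumed lead time for the reminder

class Snooze:
    def __init__(self, viewer_id: str, source_id: str):
        # source_id can identify a person, a Page, or a group
        self.viewer_id = viewer_id
        self.source_id = source_id
        self.started_at = datetime.now(timezone.utc)

    @property
    def ends_at(self) -> datetime:
        return self.started_at + SNOOZE_DURATION

    def is_active(self, now: datetime | None = None) -> bool:
        now = now or datetime.now(timezone.utc)
        return now < self.ends_at

    def should_notify_ending(self, now: datetime | None = None) -> bool:
        """True shortly before the snooze expires, so the viewer gets a heads-up."""
        now = now or datetime.now(timezone.utc)
        return self.ends_at - NOTIFY_BEFORE_END <= now < self.ends_at

def visible_posts(posts: list[dict], snoozes: list[Snooze]) -> list[dict]:
    """Filter a feed: hide posts whose source is currently snoozed.
    Reversing a snooze early is just deleting its record."""
    snoozed = {s.source_id for s in snoozes if s.is_active()}
    return [p for p in posts if p["author_id"] not in snoozed]

# Usage: snoozing one source hides only that source's posts for 30 days.
s = Snooze(viewer_id="alice", source_id="uncle_bob")
feed = visible_posts([{"author_id": "uncle_bob"}, {"author_id": "carol"}], [s])
print(feed)  # -> only carol's post remains
```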

Controls for your News Feed aren’t new. With features like Unfollow, Hide, Report and See First, we’ve consistently worked toward helping people tailor their News Feed experience, so the time they spend on Facebook is time well spent. As News Feed evolves, we’ll continue to provide easy-to-use tools to give you the most personalized experience possible every time you visit Facebook.

News Feed FYI: For Video, Intent and Repeat Viewership Matter

Today, we’re making an update to News Feed ranking that will help surface videos people are proactively seeking out and coming back to on Facebook. This change takes two factors into account:

  • Intent matters. With this update, videos from Pages that people proactively seek out, for example by using Search or by going directly to the Page, will see greater distribution in News Feed.
  • Repeat viewership matters. With this update, we will also show more videos in News Feed that people return to watch from the same publisher or creator week after week. (A rough sketch of how these two signals might combine follows this list.)
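For illustration only, here is a minimal sketch of how these two signals might be folded into a video’s ranking score. The function names, weights and rate definitions are assumptions made for this example, not the actual News Feed model.

```python
def video_boost(intent_rate: float, repeat_rate: float) -> float:
    """intent_rate: assumed fraction of a Page's video views that came from
    Search or direct visits to the Page (proactive seeking).
    repeat_rate: assumed fraction of viewers who return to the same
    publisher's videos week after week."""
    INTENT_WEIGHT = 0.6   # hypothetical weight
    REPEAT_WEIGHT = 0.4   # hypothetical weight
    return 1.0 + INTENT_WEIGHT * intent_rate + REPEAT_WEIGHT * repeat_rate

def score(base_score: float, intent_rate: float, repeat_rate: float) -> float:
    # A video that is sought out directly and rewatched weekly can earn
    # up to roughly a 2x boost over its base ranking score.
    return base_score * video_boost(intent_rate, repeat_rate)

# Example: a Page whose fans search for it and return weekly.
print(score(10.0, intent_rate=0.5, repeat_rate=0.7))  # 10 * 1.58 = 15.8
```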

What this means for my Page
As we’ve said, video on Facebook has the power to drive conversation, and News Feed remains a place where people discover and watch videos. Videos that not only bring people together but also drive repeat viewership and engagement will do well in News Feed.

For more on this update, you can visit the Media Blog.

Messenger’s 2017 Year in Review

By Sean Kelly, Product Management Director, Messenger

Over the holidays we all want to take time to celebrate, reflect and stay in touch with the people we love. It’s been 25 years since the first text message was sent, sparking a revolution in how we keep in touch with each other, and we’ve come a long way since then. The art of conversation has evolved, and we’re no longer limited to just text. Just think about it: you can now group video chat with masks, choose from thousands of emojis or GIFs to add more color to your messages, and immediately capture and share photos, even when you’re already in a conversation.

At Messenger, we know that every message matters and we’re focused on helping people say what they want to say, however they want to say it. Through GIFs, videos, group conversations or group video chats, Messenger gives people the freedom to connect in the way that is most relevant to them — expressive, humorous, visual, heartfelt or simply convenient.

By understanding how people are messaging today, we can continue to make Messenger the best place to connect with the people you care about most. We’re excited to look back over the year, and highlight the top ways we saw Messenger’s 1.3 billion strong global community connect and share with each other in 2017.

Video chat took a leap forward.

Chatting face-to-face is perfect for those spontaneous moments when text just isn’t enough. We heard from people that they wanted more than one-to-one video chats, which is why we launched group video chat about a year ago. The experience is the same whether you’re on Android or iOS, and in June we introduced a few new augmented reality features, like masks, filters and reactions, to make your video chats more fun and expressive.

  • Overall, there were 17 billion real-time video chats on Messenger in 2017, twice as many video chat sessions as in 2016.
  • People video chatted with each other all around the world — including Antarctica! We can only imagine how awesome it was to share a moment in front of icebergs and penguins with friends and family back home.


Visual messaging brings our conversations to life.

Visual messaging has become our universal language, making conversations more joyful, more impactful and, let’s face it, a whole lot more fun! This year we continued investing in our powerful, fast camera, pre-loaded with thousands of stickers, frames and other effects to make your conversations better than ever. Here’s how people expressed themselves and added delight to their conversations this year:

  • People shared over 500 billion emojis in 2017 😲, or nearly 1.7 billion every day 😍
  • GIFs are a popular choice too, with 18 billion GIFs shared in 2017


We’re connecting more than ever.

We believe in the power of messaging to make meaningful connections. While some cultural pundits would argue that messaging makes us isolated, what we’ve found is that messaging actually brings us closer together.

  • On average, more than 7 billion conversations took place on Messenger every day in 2017.
  • At the same time, on average, 260 million new conversation threads were started every day in 2017.
  • The holidays are popular times to connect with each other online as well as offline. New Year’s, Mother’s Day and Valentine’s Day were three of the top five most active days for chats on Messenger.


And it’s all about the power of groups.

This year we introduced several new features to make group chats in Messenger more fun and useful, including @mentions, which make it easy to jump back into a conversation to answer someone’s question or weigh in, and reactions, which let you quickly acknowledge a message or express how you feel with an emoji. We found that 2017 was a year to connect both with the individual people you care about most and with the groups of people you care about most:

  • In 2017, 2.5 million new groups were created on Messenger EVERY day
  • The average group chat includes 10 people.
  • Since launching in March, people have shared more than 11 billion reactions, up from the two billion shared as of June. The most popular reaction in group conversations is 😆
  • Of course, reactions are just as fun in 1:1 conversations, too. The most popular reaction in 1:1 conversations is 😍
  • We offer people on Messenger fun ways to customize their group chats. The most popular custom emoji is the red heart, and the most popular custom chat color is red.

As the year comes to a close, we want to extend a big thank-you to our Messenger community – we are so happy to be part of your everyday lives, and we can’t wait to help you chat longer, play games, take great photos or message a business in 2018! Thank you for trusting us with your messages that matter.

*Methodology: Messenger data reflects January 2017 through November 2017.
