
Removing Myanmar Military Officials From Facebook

The ethnic violence in Myanmar has been truly horrific. Earlier this month, we shared an update on the steps we’re taking to prevent the spread of hate and misinformation on Facebook. While we were too slow to act, we’re now making progress – with better technology to identify hate speech, improved reporting tools, and more people to review content.

Today, we are taking more action in Myanmar, removing a total of 18 Facebook accounts, one Instagram account and 52 Facebook Pages, followed by almost 12 million people. We are preserving data, including content, on the accounts and Pages we have removed.

Specifically, we are banning 20 individuals and organizations from Facebook in Myanmar — including Senior General Min Aung Hlaing, commander-in-chief of the armed forces, and the military’s Myawady television network. International experts, most recently in a report by the UN Human Rights Council-authorized Fact-Finding Mission on Myanmar, have found evidence that many of these individuals and organizations committed or enabled serious human rights abuses in the country. And we want to prevent them from using our service to further inflame ethnic and religious tensions. This has led us to remove six Pages and six accounts from Facebook — and one account from Instagram — which are connected to these individuals and organizations. Not all 20 of the individuals and organizations we are banning currently have a presence on Facebook or Instagram.

We have also removed 46 Pages and 12 accounts for engaging in coordinated inauthentic behavior on Facebook. During a recent investigation, we discovered that they used seemingly independent news and opinion Pages to covertly push the messages of the Myanmar military. This type of behavior is banned on Facebook because we want people to be able to trust the connections they make.

We continue to work to prevent the misuse of Facebook in Myanmar — including through the independent human rights impact assessment we commissioned earlier in the year. This is a huge responsibility given so many people there rely on Facebook for information — more so than in almost any other country given the nascent state of the news media and the recent rapid adoption of mobile phones. It’s why we’re so determined to do better in the future.

A sample of the content from these Pages and accounts is included below.

The following pieces of content violate our Community Standards and were removed from Facebook.

An Update on Our App Investigation

By Ime Archibong, VP of Product Partnerships

Today we banned myPersonality — an app that was mainly active prior to 2012 — from Facebook. The app refused our request for an audit, and it is clear that it shared information with researchers and companies with only limited protections in place. As a result we will notify the roughly 4 million people who chose to share their Facebook information with myPersonality that it may have been misused. Given we currently have no evidence that myPersonality accessed any friends’ information, we will not be notifying those people’s Facebook friends. Should that change, we will notify them.

Since launching our investigation in March, we have investigated thousands of apps. And we have suspended more than 400 due to concerns around the developers who built them or how the information people chose to share with the app may have been used — which we are now investigating in much greater depth.

It’s also why we’ve changed many of our policies — such as our expansion of App Review and our new policy that no information will be shared with apps if you haven’t used them in 90 days. We will continue to investigate apps and make the changes needed to our platform to ensure that we are doing all we can to protect people’s information.

Introducing the Ad Archive API

By Rob Leathern, Director of Product Management

We’re making advertising more transparent to help prevent abuse on Facebook, especially during elections. Today we’re starting to roll out the Ad Archive API, so researchers and journalists can more easily analyze Facebook ads related to politics or issues of national importance.

Beginning with a group of publishers, academics and researchers in the US, we’ll learn what’s most useful and how to improve the API before opening it up more broadly. Input from this group will also form the basis of an Archive report that will be available starting in September.

The API offers ad creative, start and end date, and performance data, including total spend and impressions for ads. It also shows demographics of people reached, including age, gender and location.
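To give a sense of how this might look in practice, here is a minimal sketch of a query a researcher could run once granted access. The endpoint, parameter and field names below are illustrative assumptions based on the fields described above, not documented interface details.

    import requests

    # Hypothetical sketch of an Ad Archive API query for the fields described
    # above (creative, run dates, spend, impressions, demographics). Endpoint
    # and field names are assumptions for illustration only.
    ACCESS_TOKEN = "YOUR_ACCESS_TOKEN"  # issued once API access is approved

    response = requests.get(
        "https://graph.facebook.com/ads_archive",
        params={
            "search_terms": "election",
            "ad_reached_countries": "US",
            "fields": "ad_creative_body,ad_delivery_start_time,"
                      "ad_delivery_stop_time,spend,impressions,"
                      "demographic_distribution",
            "access_token": ACCESS_TOKEN,
        },
    )
    response.raise_for_status()

    for ad in response.json().get("data", []):
        print(ad.get("ad_delivery_start_time"), ad.get("spend"))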

We’re greatly encouraged by trends and insights that watchdog groups, publishers and academics have unearthed since the archive launched in May. We believe this deeper analysis will increase accountability for both Facebook and advertisers.

If you’d like to access the API, please submit this form and we’ll respond after this initial test.

Taking Down More Coordinated Inauthentic Behavior

Today we removed multiple Pages, groups and accounts for coordinated inauthentic behavior on Facebook and Instagram. Some of this activity originated in Iran, and some originated in Russia. These were distinct campaigns and we have not identified any link or coordination between them. However, they used similar tactics by creating networks of accounts to mislead others about who they were and what they were doing.

We ban this kind of behavior because we want people to be able to trust the connections they make on Facebook. And while we’re making progress rooting out this abuse, as we’ve said before, it’s an ongoing challenge because the people responsible are determined and well funded. We constantly have to improve to stay ahead. That means building better technology, hiring more people and working more closely with law enforcement, security experts and other companies. Their collaboration was critical to our investigation, since no one company can fight this on its own.

There is always a tension between taking down these bad actors quickly and improving our defenses over the long term. If we remove them too early, it’s harder to understand their playbook and the extent of their network. It also limits our ability to coordinate with law enforcement, who often have investigations of their own. It’s why we’ve investigated some of these campaigns for many months and why we will continue working to find out more. We’ll update this post with more details when we have them, or if the facts change.


August 21, 2018

Updated on August 21, 2018 at 9:54PM PT to include additional sample posts.

What We’ve Found So Far

By Nathaniel Gleicher, Head of Cybersecurity Policy

We’ve removed 652 Pages, groups and accounts for coordinated inauthentic behavior that originated in Iran and targeted people across multiple internet services in the Middle East, Latin America, UK and US. FireEye, a cybersecurity firm, gave us a tip in July about “Liberty Front Press,” a network of Facebook Pages as well as accounts on other online services. They’ve published an initial analysis and will release a full report of their findings soon. We wanted to take this opportunity to thank them for their work.

Based on FireEye’s tip, we started an investigation into “Liberty Front Press” and identified additional accounts and Pages from their network. We are able to link this network to Iranian state media through publicly available website registration information, as well as the use of related IP addresses and Facebook Pages sharing the same admins. For example, one part of the network, “Quest 4 Truth,” claims to be an independent Iranian media organization, but is in fact linked to Press TV, an English-language news network affiliated with Iranian state media. The first “Liberty Front Press” accounts we’ve found were created in 2013. Some of them attempted to conceal their location, and they primarily posted political content focused on the Middle East, as well as the UK, US, and Latin America. Beginning in 2017, they increased their focus on the UK and US. Accounts and Pages linked to “Liberty Front Press” typically posed as news and civil society organizations sharing information in multiple countries without revealing their true identity.
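To illustrate the kind of linking described above, the sketch below clusters Pages that share an admin, an IP address, or website registration details. All of the data and the matching logic are hypothetical; this is not Facebook’s internal tooling.

    from collections import defaultdict

    # Illustrative only: link Pages that share infrastructure signals
    # (admins, IP addresses, registration emails). All data is made up.
    pages = {
        "quest4truth":       {"admins": {"a1"}, "ips": {"5.6.7.8"}, "whois": {"ops@example.ir"}},
        "libertyfrontpress": {"admins": {"a1", "a2"}, "ips": {"5.6.7.8"}, "whois": {"ops@example.ir"}},
        "unrelated_page":    {"admins": {"b9"}, "ips": {"1.2.3.4"}, "whois": {"news@example.com"}},
    }

    # Index every (signal type, value) pair to the Pages that carry it.
    signal_index = defaultdict(set)
    for page, signals in pages.items():
        for kind, values in signals.items():
            for value in values:
                signal_index[(kind, value)].add(page)

    # Two Pages are linked if any shared signal puts them in the same bucket.
    links = defaultdict(set)
    for bucket in signal_index.values():
        for page in bucket:
            links[page] |= bucket - {page}

    linked = {page: sorted(nbrs) for page, nbrs in links.items() if nbrs}
    print(linked)
    # {'quest4truth': ['libertyfrontpress'], 'libertyfrontpress': ['quest4truth']}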

  • Presence on Facebook and Instagram:  74 Pages, 70 accounts, and 3 groups on Facebook, as well as 76 accounts on Instagram.
  • Followers:  About 155,000 accounts followed at least one of these Pages, 2,300 accounts joined at least one of these groups, and more than 48,000 accounts followed at least one of these Instagram accounts.
  • Advertising:  More than $6,000 in spending for ads on Facebook and Instagram, paid for in US and Australian dollars. The first ad ran in January 2015, and the last ran in August 2018. Some of these ads were blocked after the launch of our political ads transparency tools. We have not completed our review of the organic content coming from these accounts.
  • Events:  3 events hosted.
  • Content:  A sample of English-language posts is included below.


The second part of our investigation found links between “Liberty Front Press” and another set of accounts and Pages, the first of which was created in 2016. They typically posed as news organizations and didn’t reveal their true identity. They also engaged in traditional cybersecurity attacks, including attempts to hack people’s accounts and spread malware, which we had seen before and disrupted.

  • Presence on Facebook and Instagram: 12 Pages and 66 accounts on Facebook, as well as 9 accounts on Instagram.
  • Followers:  About 15,000 accounts followed at least one of these Pages and more than 1,100 followed at least one of these Instagram accounts.
  • Advertising:  We have found no advertising associated with these accounts or Pages. We have not completed our review of the organic content from these accounts.
  • Events:  We have found no events associated with these accounts or Pages.
  • Content:  A sample of Arabic-language posts is included below.

The third part of our investigation uncovered another set of accounts and Pages, the first of which was created in 2011, that largely shared content about Middle East politics in Arabic and Farsi. They also shared content about politics in the UK and US in English. We first discovered this set in August 2017 and expanded our investigation in July 2018 as we stepped up our efforts ahead of the US midterm elections.

  • Presence on Facebook and Instagram:  168 Pages and 140 accounts on Facebook, as well as 31 accounts on Instagram.
  • Followers:  About 813,000 accounts followed at least one of these Pages and more than 10,000 followed at least one of these Instagram accounts.
  • Advertising:  More than $6,000 in spending for ads on Facebook and Instagram, paid for in US dollars, Turkish lira, and Indian rupees. The first ad was run in July 2012, and the last was run in April 2018. We have not completed our review of the organic content coming from these accounts.
  • Events:  25 events hosted.
  • Content:  A sample of English-language posts is included below.

We’re still investigating, and we have shared what we know with the US and UK governments. Since there are US sanctions involving Iran, we’ve also briefed the US Treasury and State Departments. These sanctions allow companies to provide people internet services for personal communications, including the government and its affiliates. But Facebook takes steps to prevent people in Iran and other sanctioned countries from using our ad tools. For example, our systems screen every advertiser to identify their current location and whether they’re named on the US government’s list of sanctioned individuals. Based on what we learn in this investigation and from government officials, we’ll make changes to better detect people who try to evade our sanctions compliance tools and prevent them from advertising.
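As a rough illustration of the screening step described here, consider the sketch below. The lists and the matching are deliberately simplified assumptions; real compliance screening (for example, against the US Treasury’s sanctions lists) involves fuzzy name matching, aliases and review processes.

    # Simplified sketch of advertiser screening: check apparent location and
    # sanctions-list membership before allowing ads. Illustrative only.
    SANCTIONED_COUNTRIES = {"IR", "KP", "SY"}         # illustrative subset
    SANCTIONED_NAMES = {"example sanctioned entity"}  # stand-in for a real list

    def may_advertise(advertiser_name: str, country_code: str) -> bool:
        if country_code in SANCTIONED_COUNTRIES:
            return False
        if advertiser_name.strip().lower() in SANCTIONED_NAMES:
            return False
        return True

    assert may_advertise("Any Shop", "GB")
    assert not may_advertise("Any Shop", "IR")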

Finally, we’ve removed Pages, groups and accounts that can be linked to sources the US government has previously identified as Russian military intelligence services. This is unrelated to the activities we found in Iran. While these are some of the same bad actors we removed for cybersecurity attacks before the 2016 US election, this more recent activity focused on politics in Syria and Ukraine. For example, they are associated with Inside Syria Media Center, which the Atlantic Council and other organizations have identified as covertly spreading pro-Russian and pro-Assad content. To date, we have not found activity by these accounts targeting the US.

We’re working closely with US law enforcement on this investigation, and we appreciate their help. These investigations are ongoing – and given the sensitivity we aren’t sharing more information about what we removed.


Update on August 21, 2018 at 7:55PM PT

When to Take Action Against Cyber Threats

By Chad Greene, Director of Security

As soon as a cyber threat is discovered, security teams face a difficult decision: when to take action. Do we immediately shut down a campaign in order to prevent harm? Or do we spend time investigating the extent of the attack and who’s behind it so we can prevent them from doing bad things again in the future?

These questions have been debated by security experts for years. And it’s a trade-off that our team at Facebook has grappled with over the past year as we’ve identified different cyber threats — including the coordinated inauthentic behavior we took down today. There are countless things we consider in each case. How active is the threat? How sophisticated are the actors? How much harm is being done? And how will the threat play into world events? Here is a summary of what we have learned over the years – in many cases lessons that we have had to learn the hard way.

Who We Share Information With — and When

Cyber threats don’t happen in a vacuum. Nor should investigations. Really understanding the nature of a threat requires understanding how the actors communicate, how they acquire things like hosting and domain registration, and how the threat manifests across other services. To help gather this information, we often share intelligence with other companies once we have a basic grasp of what’s happening. This also lets them better protect their own users.

Academic researchers are also invaluable partners. This is because third-party experts, both individuals and organizations, often have a unique perspective and additional information that can help us. They also play an important role when it comes to raising the public’s awareness about these problems and how people can better protect themselves.

Law enforcement is crucial, too. There are cases where law enforcement can play a specific role in helping us mitigate a threat that we’ve identified, and in those instances, we’ll reach out to the appropriate agency to share what we know and seek their help. In doing this, our top priority is always to minimize harm to the people that use our services.

When we decide to take down a threat — a decision I’ll go into more below — we also need to consider our options for alerting the people who may have been affected. For example, in cases of targeted malware and hacking attempts that we know are being done by a sophisticated bad actor, like a nation state, we may put a notice at the top of people’s News Feed to alert them and make sure their account is safe. In the case of an attack that seeks to cause broader societal harm – like using misinformation to manipulate people or create division – where possible we share what we know with the press and third-party researchers so the public is aware of the issue.

When We’d Wait — And When We’d Act

When we identify a campaign, our aim is to learn as much as we can about: the extent of the bad actors’ presence on our services; their actions; and what we can do to deter them. When we reach a point where our analysis is turning up little new information, we’ll take down a campaign, knowing that more time is unlikely to bring us more answers. This was the case with the campaign we took down today, which was linked to Russian military intelligence services.

But if we’re still learning as we dig deeper, we’ll likely hold off on taking any action that might tip off our adversary and prompt them to change course. After all, the more we know about a threat, the better we’ll be at stopping the same actors from striking again in the future.

This is particularly true for highly sophisticated actors who are adept at covering their tracks. We want to understand their tactics and respond in a way that keeps them off Facebook for good. Amateur actors, on the other hand, can be taken down quickly with relative confidence that we’d be able to find them if they crop up elsewhere — even with limited information on who they are or how they operate.

Often, though, we have to take action before we’ve exhausted our investigation. For example, we’ll always move quickly against a threat when there’s an immediate risk to safety. So if we determine that someone is trying to compromise another person’s account in order to determine their location — and we suspect the target might be in physical danger — we’d take action immediately, as well as notify the person being targeted and law enforcement when appropriate.

These considerations don’t stop at physical harm. We also look at how a threat might impact upcoming world events. This sometimes means that we speed up taking something down because an event is approaching. This was the case when we removed 32 Pages and accounts last month. In other cases, this may mean delaying action before an upcoming event to reduce the chances that a bad actor will have time to regroup and cause harm.

Our Best Bet

Security experts can never be one hundred percent confident in their timing. But what we can do is closely consider the many moving pieces, weigh the benefits and risks of various scenarios, and make a decision that we think will be best for people on our services and society at large.



Update on Myanmar

By Sara Su, Product Manager

We have a responsibility to fight abuse on Facebook. This is especially true in countries like Myanmar where many people are using the internet for the first time and social media can be used to spread hate and fuel tension on the ground.

The ethnic violence in Myanmar is horrific and we have been too slow to prevent misinformation and hate on Facebook. It’s why we created a dedicated team across product, engineering and policy to work on issues specific to Myanmar earlier this year. Today we’re sharing details on the investments we have made and the results they have started to yield.

Better Tools and Technology

The rate at which bad content is reported in Burmese, whether it’s hate speech or misinformation, is low. This is due to challenges with our reporting tools, technical issues with font display and a lack of familiarity with our policies. So we’re investing heavily in artificial intelligence that can proactively flag posts that break our rules.

In the second quarter of 2018, we proactively identified about 52% of the content we removed for hate speech in Myanmar. This is up from 13% in the last quarter of 2017, and is the result of the investments we’ve made in both detection technology and people, a combination that helps find potentially violating content and accounts and flag them for review. As recently as last week, we proactively identified posts that indicated a threat of credible violence in Myanmar. We removed the posts and flagged them to civil society groups to ensure that they were aware of potential violence.

We’re also working to make it easier for people to report content in the first place. One of the biggest problems we face is the way text is displayed in Myanmar. Unicode is the global industry standard for encoding and displaying text, including Burmese and the other languages used in Myanmar. However, over 90% of phones in Myanmar use Zawgyi, a font encoding used only to display Burmese. This means that someone with a Zawgyi phone can’t properly read websites, posts or Facebook Help Center instructions written in Unicode. Myanmar is switching to Unicode, and we’re helping by removing Zawgyi as an option for new Facebook users and improving font converters for existing ones. This will not affect people’s posts but it will standardize how they see buttons, Help Center instructions and reporting tools in the Facebook app.
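For engineers dealing with Burmese text, the practical task is detecting which encoding a string uses and converting when needed. A minimal sketch, assuming the open-source myanmar-tools detector and an ICU Zawgyi transliterator are available (both exist as public projects, but treat the exact APIs here as assumptions):

    # Sketch: detect probable Zawgyi text and convert it to standard Unicode.
    # Assumes `pip install myanmar-tools` (Google's open-source detector) and
    # `pip install PyICU` for the Zawgyi-to-Unicode transliterator. This is an
    # illustration, not Facebook's internal converter.
    from myanmartools import ZawgyiDetector
    from icu import Transliterator

    detector = ZawgyiDetector()
    to_unicode = Transliterator.createInstance("Zawgyi-my")

    def normalize_burmese(text: str, threshold: float = 0.95) -> str:
        """Return text as Unicode, converting it if it looks like Zawgyi."""
        if detector.get_zawgyi_probability(text) > threshold:
            return to_unicode.transliterate(text)
        return text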

Our teams are always looking for ways to make reporting easier and more intuitive. In addition to improving our Facebook reporting tools, we’ve introduced new tools on the Messenger mobile app for people to report conversations that might violate our Community Standards.

When we do receive reports of hate speech, they’re sent to our content review team, which has had Myanmar language reviewers for several years. As of this June, we had over 60 Myanmar language experts reviewing content and we will have at least 100 by the end of this year. But it’s not enough to add more reviewers because we can’t rely on reports alone to catch bad content. Engineers across the company are building artificial intelligence tools that help us identify abusive posts and experts from our policy and partnerships teams are working with civil society and building digital literacy programs for people in Myanmar.

Evolving and Enforcing our Policies

It has also become clear that in Myanmar, false news can be used to incite violence, especially when coupled with ethnic and religious tensions. We have updated our credible violence policies to account for this, removing misinformation that has the potential to contribute to imminent violence or physical harm.

We’re working with a network of independent organizations to identify these posts – and we’ve already taken down content in Myanmar that they’ve flagged. This new policy will be global, but we are initially focusing our work on countries where false news has had life or death consequences. These include Sri Lanka, India, Cameroon, and the Central African Republic as well as Myanmar.

While we’re adapting our approach to false news given the changing circumstances, our rules on hate speech have stayed the same: it’s not allowed. And we are getting much more proactive in designating Myanmar hate figures and organizations on Facebook, including Wirathu, Thuseitta, Parmaukkha, Ma Ba Tha and the Buddha Dhamma Prahita Foundation. These individuals and groups are now banned from Facebook — they aren’t allowed to have a presence on Facebook, and no one else is allowed to support, praise or represent them.

Partnerships and Programs on the Ground

We continue to learn from civil society, which has a strong grasp of these issues and helps us understand how our policies play out on the ground. With their help, we have held education campaigns in Myanmar for three years, including a locally translated and illustrated version of our Community Standards. More recently we introduced locally designed tips on how to spot false news, and we’re working to strengthen individual account security in the country where security is typically weak.

As part of our membership in the Global Network Initiative, we routinely conduct impact assessments of product and policy decisions across our apps. Local organizations have asked that we conduct a human rights impact assessment in Myanmar. We have hired Business for Social Responsibility, a non-profit that has expertise in human rights, to do this work and we’ll share the results once we have them.

This is some of the most important work being done at Facebook. And we know we can’t do it alone — we need help from civil society, other technology companies, journalists, schools, government and, most important of all, members of our community.

The weight of this work, and its impact on the people of Myanmar, is felt across the company.

An Update on Facebook App Review

By Ime Archibong, VP of Product Partnerships

Back in May, we announced that all apps using the Facebook Platform APIs would need to go through a more comprehensive review to better protect people’s Facebook information — with an August 1 deadline to submit for review for all existing apps. As a result, we are cutting off API access for hundreds of thousands of inactive apps that have not submitted for our app review process.

We’d encourage developers whose apps are still in use, but have not yet been submitted for app review, to do so now. However, to ensure all apps currently in use go through our review process, we will be proactively queueing up apps for review. Where we need more information, developers will have a limited amount of time to respond. If we don’t hear back within that timeframe, we will remove the app’s access to APIs that require approval. Developers will not lose their API access while their app is in the queue or while we are reviewing it — so long as they comply with our Platform Policies.
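The review flow described above can be pictured as a simple state machine. The sketch below paraphrases the paragraph’s states and transitions and is purely illustrative, not a real system.

    # Illustrative state machine for the app review flow described above.
    # State and event names are paraphrased from the text.
    TRANSITIONS = {
        ("queued", "review_started"):              "in_review",
        ("in_review", "info_requested"):           "awaiting_developer",
        ("awaiting_developer", "responded"):       "in_review",
        ("awaiting_developer", "deadline_passed"): "api_access_removed",
        ("in_review", "approved"):                 "approved",
        ("in_review", "rejected"):                 "api_access_removed",
    }

    def next_state(state: str, event: str) -> str:
        return TRANSITIONS.get((state, event), state)

    # API access persists while queued, in review, or awaiting a timely
    # response; it is removed only on rejection or a missed deadline.
    assert next_state("awaiting_developer", "deadline_passed") == "api_access_removed"
    assert next_state("queued", "deadline_passed") == "queued"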

Our goal with all these changes is to ensure that we better protect people’s Facebook information while also enabling developers to build great social experiences – like managing a group, planning a trip or getting concert tickets for your favorite band.

Removing Bad Actors on Facebook

Today we removed 32 Pages and accounts from Facebook and Instagram because they were involved in coordinated inauthentic behavior. This kind of behavior is not allowed on Facebook because we don’t want people or organizations creating networks of accounts to mislead others about who they are, or what they’re doing.

We’re still in the very early stages of our investigation and don’t have all the facts — including who may be behind this. But we are sharing what we know today given the connection between these bad actors and protests that are planned in Washington next week. We will update this post with more details when we have them, or if the facts we have change.

It’s clear that whoever set up these accounts went to much greater lengths to obscure their true identities than the Russian-based Internet Research Agency (IRA) has in the past. We believe this could be partly due to changes we’ve made over the last year to make this kind of abuse much harder. But security is not something that’s ever done. We face determined, well-funded adversaries who will never give up and are constantly changing tactics. It’s an arms race and we need to constantly improve too. It’s why we’re investing heavily in more people and better technology to prevent bad actors from misusing Facebook — as well as working much more closely with law enforcement and other tech companies to better understand the threats we face.



July 31, 2018

What We’ve Found So Far

By Nathaniel Gleicher, Head of Cybersecurity Policy

About two weeks ago we identified the first of eight Pages and 17 profiles on Facebook, as well as seven Instagram accounts, that violate our ban on coordinated inauthentic behavior. We removed all of them this morning once we’d completed our initial investigation and shared the information with US law enforcement agencies, Congress, other technology companies, and the Atlantic Council’s Digital Forensic Research Lab, a research organization that helps us identify and analyze abuse on Facebook.

  • In total, more than 290,000 accounts followed at least one of these Pages. The earliest of these Pages was created in March 2017, and the latest was created in May 2018.
  • The most followed Facebook Pages were “Aztlan Warriors,” “Black Elevation,” “Mindful Being,” and “Resisters.” The remaining Pages had between zero and 10 followers, and the Instagram accounts had zero followers.
  • There were more than 9,500 organic posts created by these accounts on Facebook, and one piece of content on Instagram.
  • They ran about 150 ads for approximately $11,000 on Facebook and Instagram, paid for in US and Canadian dollars. The first ad was created in April 2017, and the last was created in June 2018.
  • The Pages created about 30 events, beginning in May 2017. About half had fewer than 100 accounts interested in attending. The largest had approximately 4,700 accounts interested in attending, and 1,400 users said that they would attend.

We are still reviewing all of the content and ads from these Pages. In the meantime here are some examples of the content and ads posted by these Pages.

These bad actors have been more careful to cover their tracks, in part due to the actions we’ve taken to prevent abuse over the past year. For example, they used VPNs and internet phone services, and paid third parties to run ads on their behalf. As we’ve told law enforcement and Congress, we still don’t have firm evidence to say with certainty who’s behind this effort. Some of the activity is consistent with what we saw from the IRA before and after the 2016 elections. And we’ve found evidence of some connections between these accounts and IRA accounts we disabled last year, which is covered below. But there are differences, too. For example, while IP addresses are easy to spoof, the IRA accounts we disabled last year sometimes used Russian IP addresses. We haven’t seen those here.

We found this activity as part of our ongoing efforts to identify coordinated inauthentic behavior. Given these bad actors are now working harder to obscure their identities, we need to find every small mistake they make. It’s why we’re following up on thousands of leads, including information from law enforcement and lessons we learned from last year’s IRA investigation. The IRA engaged with many legitimate Pages, so these leads sometimes turn up nothing. However, one of these leads did turn up something. One of the IRA accounts we disabled in 2017 shared a Facebook Event hosted by the “Resisters” Page. This Page also previously had an IRA account as one of its admins for only seven minutes. These discoveries helped us uncover the other inauthentic accounts we disabled today.
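To make the lead-following concrete, here is a hedged sketch of expanding from known bad seed accounts across shared interactions (events shared, admin roles) to surface candidates for human review. The graph is hypothetical and, as noted above, most such leads turn up nothing.

    from collections import deque

    # Hypothetical interaction graph: account/Page -> entities it touched.
    edges = {
        "ira_account_2017": {"resisters_event", "resisters_page"},
        "resisters_page":   {"resisters_event"},
        "new_account_1":    {"resisters_event"},
        "legit_page":       {"unrelated_event"},
    }

    def expand(seeds, hops=2):
        """Breadth-first expansion over shared interactions from seeds."""
        adj = {}
        for src, dsts in edges.items():
            for dst in dsts:
                adj.setdefault(src, set()).add(dst)
                adj.setdefault(dst, set()).add(src)
        seen, frontier = set(seeds), deque((s, 0) for s in seeds)
        while frontier:
            node, depth = frontier.popleft()
            if depth == hops:
                continue
            for nbr in adj.get(node, ()):
                if nbr not in seen:
                    seen.add(nbr)
                    frontier.append((nbr, depth + 1))
        return seen - set(seeds)

    # Surfaces the event, the Page, and one co-interacting account as leads;
    # the unconnected "legit_page" is never touched.
    print(expand({"ira_account_2017"}))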

The “Resisters” Page also created a Facebook Event for a protest on August 10 to 12 and enlisted support from real people. The Event – “No Unite the Right 2 – DC” – was scheduled to protest an August “Unite the Right” event in Washington. Inauthentic admins of the “Resisters” Page connected with admins from five legitimate Pages to co-host the event. These legitimate Pages unwittingly helped build interest in “No Unite the Right 2 – DC” and posted information about transportation, materials, and locations so people could get to the protests.

We disabled the event earlier today and have reached out to the admins of the five other Pages to update them on what happened. This afternoon, we’ll begin informing the approximately 2,600 users interested in the event, and the more than 600 users who said they’d attend, about what happened.

We don’t have all the facts, but we’ll work closely with others as we continue our investigation. We hope to get new information from law enforcement and other companies so we can better understand what happened — and we’ll share any additional findings with law enforcement and Congress. However, we may never be able to identify the source with the same level of confidence we had in naming the IRA last year. See Alex Stamos’ post below on why attribution can be really hard.

We’re seeing real benefits from working with outside experts. Partners like the Atlantic Council have provided invaluable help in identifying bad actors and analyzing their behavior across the internet. Based on leads from the recent US Department of Justice indictment, the Atlantic Council identified a Facebook group with roughly 4,000 members. It was created by Russian government actors but had been dormant since we disabled the group’s admins last year. Groups typically persist on Facebook even when their admins are disabled, but we chose to remove this group to protect the privacy of its members in advance of a report that the Atlantic Council plans to publish as soon as it concludes its analysis. It will follow this report in the coming weeks with an analysis of the Pages, accounts and profiles we disabled today.


July 31, 2018

How Much Can Companies Know About Who’s Behind Cyber Threats?

By Alex Stamos, Chief Security Officer

Deciding when and how to publicly link suspicious activity to a specific organization, government, or individual is a challenge that governments and many companies face. Last year, we said the Russia-based Internet Research Agency (IRA) was behind much of the abuse we found around the 2016 election. But today we’re shutting down 32 Pages and accounts engaged in coordinated inauthentic behavior without saying that a specific group or country is responsible.

The process of attributing observed activity to particular threat actors has been much debated by academics and within the intelligence community. All modern intelligence agencies use their own internal guidelines to help them consistently communicate their findings to policymakers and the public. Companies, by comparison, operate with relatively limited information from outside sources — though as we get more involved in detecting and investigating this kind of misuse, we also need clear and consistent ways to confront and communicate these issues head on.

Determining Who is Behind an Action

The first challenge is figuring out the type of entity to which we are attributing responsibility. This is harder than it might sound. It is standard for both traditional security attacks and information operations to be conducted using commercial infrastructure or computers belonging to innocent people that have been compromised. As a result, simple techniques like blaming the owner of an IP address that was used to register a malicious account usually aren’t sufficient to accurately determine who’s responsible.

Instead, we try to:

  • Link suspicious activity to the individual or group with primary operational responsibility for the malicious action. We can then potentially associate multiple campaigns to one set of actors, study how they abuse our systems, and take appropriate countermeasures.
  • Tie a specific actor to a real-world sponsor. This could include a political organization, a nation-state, or a non-political entity.

The relationship between malicious actors and real-world sponsors can be difficult to determine in practice, especially for activity sponsored by nation-states. In his seminal paper on the topic, Jason Healey described a spectrum to measure the degree of state responsibility for cyber attacks. This included 10 discrete steps ranging from “state-prohibited,” where a state actively stops attacks originating from their territory, to “state-integrated,” where the attackers serve as fully integrated resources of the national government.

This framework is helpful when looking at the two major organized attempts to interfere in the 2016 US election on Facebook that we have found to date. One set of actors used hacking techniques to steal information from email accounts — and then contacted journalists using social media to encourage them to publish stories about the stolen data. Based on our investigation and information provided by the US government, we concluded that this work was the responsibility of groups tied to the GRU, or Russian military intelligence. The recent Special Counsel indictment of GRU officers supports our assessment in this case, and we would consider these actions to be “state-integrated” on Healey’s spectrum.

The other major organized effort did not include traditional cyber attacks but was instead designed to sow division using social media. Based on our own investigations, we assessed with high confidence that this group was part of the IRA. There has been a public debate about the relationship between the IRA and the Russian government — though most seem to conclude this activity is between “state-encouraged” and “state-ordered” using Healey’s definitions.
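For reference, the named points on the spectrum can be laid out in order. The sketch below includes only the four steps this post mentions (the paper defines ten in total) and mirrors the two assessments above.

    # The four points on Healey's spectrum named in this post, ordered from
    # least to most state responsibility. The paper defines ten discrete
    # steps; the unnamed intermediate ones are omitted here.
    HEALEY_SPECTRUM = [
        "state-prohibited",   # the state actively stops attacks from its soil
        "state-encouraged",
        "state-ordered",
        "state-integrated",   # attackers are integrated national resources
    ]

    def more_state_responsibility(a, b):
        """True if step `a` implies greater state responsibility than `b`."""
        return HEALEY_SPECTRUM.index(a) > HEALEY_SPECTRUM.index(b)

    # The GRU activity ("state-integrated") implies more direct state
    # responsibility than the IRA activity ("state-encouraged" to
    # "state-ordered"), per the assessments above.
    assert more_state_responsibility("state-integrated", "state-ordered")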

Four Methods of Attribution

Academics have written about a variety of methods for attributing activity to cyber actors, but for our purposes we simplify these methods into an attribution model with four general categories. And while all of these are appropriate for government organizations, we do not believe companies should use all of them (a sketch of this model follows the list):

  • Political Motivations: In this model, inferred political motivations are measured against the known political goals of a nation-state. Providing public attribution based on political evidence is especially challenging for companies because we don’t have the information needed to make this kind of evaluation. For example, we lack the analytical capabilities, signals intelligence, and human sources available to the intelligence community. As a result, we don’t believe it is appropriate for Facebook to give public comment on the political motivations of nation-states.
  • Coordination: Sometimes we will observe signs of coordination between threat actors even when the evidence indicates that they are operating separate technical infrastructure. We have to be careful, though, because coincidences can happen. Collaboration that requires sharing of secrets, such as the possession of stolen data before it has been publicly disclosed, should be treated as much stronger evidence than open interactions in public forums.
  • Tools, Techniques and Procedures (TTPs): By looking at how a threat group performs their actions to achieve a goal — including reconnaissance, planning, exploitation, command and control, and exfiltration or distribution of information — it is often possible to infer a linkage between a specific incident and a known threat actor. We believe there is value in providing our assessment of how TTPs compare with previous events, but we don’t plan to rely solely upon TTPs to provide any direct attribution.
  • Technical Forensics: By studying the specific indicators of compromise (IOCs) left behind in an incident, it’s sometimes possible to trace activity back to a known or new organized actor. Sometimes these IOCs point to a specific group using shared software or infrastructure, or to a specific geographic location. In situations where we have high confidence in our technical forensics, we provide our best attribution publicly and report the specific information to the appropriate government authorities. This is especially true when these forensics are compatible with independently gathered information from one of our private or public partners.
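Here is that sketch: a minimal model of the evidence categories and the decision rule this framework implies for a company. Field names and the rule are paraphrased from the text above; this is an illustration, not an internal system.

    from dataclasses import dataclass

    # Minimal model of the attribution framework described above. Political
    # motivation is deliberately absent: per the post, it is not an
    # assessment a company should make publicly.
    @dataclass
    class AttributionEvidence:
        coordination: str          # "none", "public", or "secret-sharing"
        ttp_match: bool            # tactics match a known actor's playbook
        forensics_confidence: str  # "low", "medium", or "high"

    def public_attribution(evidence: AttributionEvidence) -> bool:
        # Coordination and TTP overlap inform the assessment, but only
        # high-confidence technical forensics support naming an actor.
        return evidence.forensics_confidence == "high"

    # Applied to today's takedown (see the next section): IRA-like TTPs and
    # some account overlap, but forensics short of high confidence, so no
    # public attribution.
    today = AttributionEvidence(coordination="public", ttp_match=True,
                                forensics_confidence="medium")
    assert not public_attribution(today)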

Applying the Framework to Our New Discovery

Here is how we use this framework to discuss attribution of the accounts and Pages we removed today:

  • As mentioned, we will not provide an assessment of the political motivations of the group behind this activity.
  • We have found evidence of connections between these accounts and previously identified IRA accounts. For example, in one instance a known IRA account was an administrator on a Facebook Page controlled by this group. These are important details, but on their own insufficient to support a firm determination, as we have also seen examples of authentic political groups interacting with IRA content in the past.
  • Some of the tools, techniques and procedures of this actor are consistent with those we saw from the IRA in 2016 and 2017. But we don’t believe this evidence is strong enough to provide public attribution to the IRA. The TTPs of the IRA have been widely discussed and disseminated, including by Facebook, and it’s possible that a separate actor could be copying their techniques.
  • Our technical forensics are insufficient to provide high confidence attribution at this time. We have proactively reported our technical findings to US law enforcement because they have much more information than we do, and may in time be in a position to provide public attribution.

Given all this, we are not going to attribute this activity to any one group right now. This set of actors has better operational security and does more to conceal their identities than the IRA did around the 2016 election, which is to be expected. We were able to tie previous abuse to the IRA partly because of several unique aspects of their behavior that allowed us to connect a large number of seemingly unrelated accounts. After we named the IRA, we expected the organization to evolve. The set of actors we see now might be the IRA with improved capabilities, or it could be a separate group. This is one of the fundamental limitations of attribution: offensive organizations improve their techniques once they have been uncovered, and it is wishful thinking to believe that we will always be able to identify persistent actors with high confidence.

The lack of firm attribution in this case or others does not suggest a lack of action. We have invested heavily in people and technology to detect inauthentic attempts to influence political discourse, and enforcing our policies doesn’t require us to confidently attribute the identity of those who violate them or their potential links to foreign actors. We recognize the importance of sharing our best assessment of attribution with the public, and despite the challenges we intend to continue our work to find and stop this behavior, and to publish our results responsibly.


Protecting Our Community in Brazil

By Nathaniel Gleicher, Head of Cybersecurity Policy

Facebook gives millions of people in Brazil a voice online, and we want to make sure their conversations happen in an authentic and safe environment. That’s why we require that people use their real identities on Facebook.

As part of our ongoing efforts to prevent abuse — and after a rigorous investigation — we recently took down a network of 196 Pages and 87 accounts in Brazil that violated our authenticity policies. These were part of a coordinated network that hid behind fake Facebook accounts and misled people to sow division and spread misinformation. The accounts and Pages we took down were in direct violation of our policies. We’re always monitoring for inauthentic accounts, as well as other types of abuse, and will remove any account or Page that breaks our rules.

We don’t want this kind of behavior on Facebook — and we’re investing heavily in both people and technology to keep bad content off our services. We now have about 15,000 people working on security and content review across the world, and we’ll expand those teams to more than 20,000 by the end of this year. We use reports from our community and technology like machine learning and artificial intelligence to detect bad behavior and take action more quickly.

Working Together to Give People More Control of Their Data

By Steve Satterfield, Privacy & Public Policy Director

This year, we’ve made our privacy settings easier to find and improved your ability to manage your data, including through Download Your Information, which is a way to access a secure copy of the data you’ve shared with Facebook. But using your data from one service when you sign up for another still isn’t as easy as it should be. Today we’re excited to announce that we’re participating in the Data Transfer Project, a collaboration of organizations, including Google, Microsoft and Twitter, committed to building a common way for people to transfer data into and out of online services.

Moving your data between any two services can be complicated because every service is built differently and uses different types of data that may require unique privacy controls and settings. For example, you might use an app where you share photos publicly, a social networking app where you share updates with friends, and a fitness app for tracking your workouts. People increasingly want to be able to move their data among different kinds of services like these, but they expect that the companies that help them do that will also protect their data.
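The core idea behind such a framework is a shared data model with per-service export and import adapters, so any exporter can feed any importer. Here is a minimal sketch of that idea with hypothetical names; the actual design is described in the project’s white paper.

    from dataclasses import dataclass
    from typing import Iterable, Protocol

    # One shared, service-neutral data model (photos, for example).
    @dataclass
    class Photo:
        title: str
        url: str

    class Exporter(Protocol):
        def export(self) -> Iterable[Photo]: ...

    class Importer(Protocol):
        def import_items(self, items: Iterable[Photo]) -> None: ...

    def transfer(source: Exporter, destination: Importer) -> None:
        # In a real system, auth, privacy controls and rate limiting
        # would all live around this call.
        destination.import_items(source.export())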

These are the kinds of issues the Data Transfer Project will tackle. The Project is in its early stages, and we hope more organizations and experts will get involved. More information is available in the project’s white paper and at https://datatransferproject.dev/.

Facebook AI Research Expands With New Academic Collaborations

By Yann LeCun, Chief AI Scientist

We created Facebook AI Research over four years ago to focus on advancing the science and technology of AI, and we’ve always done this by collaborating with local academic communities. FAIR relies on open partnerships to help drive AI forward, where researchers have the freedom to control their own agenda. Our researchers frequently collaborate with academics from other institutions, and we often provide financial and hardware resources to specific universities. It’s through working together and openly publishing research that we’ll make progress. Today, we’re announcing new additions to FAIR who are helping us build new AI-specific labs and strengthen existing offices:

  • Pittsburgh: Jessica Hodgins will lead a new FAIR lab in Pittsburgh, which will focus on robotics, lifelong learning systems that learn continuously over years, teaching machines to reason, and AI in support of creativity. Jessica’s research focuses on computer graphics, animation, and robotics with an emphasis on generating and analyzing human motion. Her expertise will also benefit the Facebook Reality Lab already in Pittsburgh. She is joined by Abhinav Gupta, who will focus on large-scale visual and robot learning, self-supervised learning, and reasoning. Both Jessica, professor of robotics and computer science, and Abhinav, associate professor of robotics, will retain their Carnegie Mellon University positions part-time.
  • Seattle: Luke Zettlemoyer recently joined our Seattle office, where we have AI Research and Computational Photography teams. He brings expertise in natural language processing to Facebook while retaining his associate professor position in the Paul G. Allen School of Computer Science & Engineering at the University of Washington. His research will focus on computational semantics, including deep learning methods for multilingual language understanding.
  • London: Andrea Vedaldi will join our London office, where he will focus on computer vision and machine learning. He will retain his associate professor of engineering science position with the University of Oxford, where he also co-leads the Visual Geometry Group. Within FAIR he will be researching image understanding, specifically on unsupervised learning through large and diverse visual datasets and by understanding geometric 3D reasoning. He will also continue to teach and supervise research students at the University of Oxford, several of whom will be supported by Facebook through PhD scholarships. Also joining us in London is the team behind Bloomsbury AI, which we announced earlier this month. They have a strong background in natural language processing and will use that expertise to continue pursuing research in text understanding and reasoning systems.
  • Menlo Park: Jitendra Malik, one of the most influential researchers in computer vision, recently joined from UC Berkeley to lead FAIR out of Menlo Park. He has been influential in shaping Berkeley’s AI group into the exceptional lab that it is today, and we look forward to his help in continuing the growth of FAIR. He will retain part-time affiliation with UC Berkeley to advise students, and the Berkeley AI Research Lab is one of several receiving funding from FAIR.

This dual affiliation model is common across FAIR, with many of our researchers around the world splitting their time between FAIR and a university. Rob Fergus and I do this with NYU, Joelle Pineau with McGill, Devi Parikh and Dhruv Batra with Georgia Tech, Pascal Vincent with Université de Montréal, Iasonas Kokkinos with University College London, and Lior Wolf with Tel Aviv University. This model allows people within FAIR to continue teaching classes and advising graduate students and postdoctoral researchers, while publishing papers regularly. This co-employment appointment concept is similar to how many professors in medicine, law, and business operate.

Part of our commitment to academia and local ecosystems is also investing in them and providing tools they need to thrive. As we’ve done in the past, we plan to support a number of PhD students who will conduct research in collaboration with researchers at FAIR and their university faculty, or on topics of interest to FAIR under the direction of their faculty. We’re also providing millions in funding to the schools from which we’ve hired. This allows the professors to spend less time fundraising for their labs and more time working with their students.

For students, association with FAIR can provide collaboration opportunities with researchers who have a broad set of expertise and the computational resources to pursue large-scale learning research. It also provides a platform for students to showcase their research and ground it in real-world problems at scale. Beyond collaborations, we also offer fellowships and emerging scholars programs to support promising doctoral students. We’re constantly evaluating what opportunities we can offer students, and most recently increased the number of PhD fellows with FAIR Paris’ CIFRE program from 15 students to 40, granted new scholarships to students, and funded 10 servers for French public institutions. We will assess how to bring similar investments to other FAIR offices around the world.

We’re excited to continue investing in academia, educating the next generation of researchers and engineers, and strengthening interaction across AI disciplines that can traditionally become siloed. Thank you to all the academics around the world who are collaborating with FAIR to advance AI.


Working to Keep Facebook Safe

Content reviewers in Essen, Germany

By Monika Bickert, Vice President of Global Policy Management

Update on July 17, 2018:

After watching the program, we want to give you some more facts about a couple of important issues raised by Channel 4 News.

Cross Check
We want to make clear that we remove content from Facebook, no matter who posts it, when it violates our standards. There are no special protections for any group — whether on the right or the left. ‘Cross Check’ — the system described in Dispatches — simply means that some content from certain Pages or Profiles is given a second layer of review to make sure we’ve applied our policies correctly.

This typically applies to high-profile, regularly visited Pages or pieces of content on Facebook so that they are not mistakenly removed or left up. Many media organizations’ Pages — from Channel 4 to the BBC and The Verge — are cross checked. We may also Cross Check reports on content posted by celebrities, governments, or Pages where we have made mistakes in the past. For example, we have Cross Checked an American civil rights activist’s account to avoid mistakenly deleting instances of him raising awareness of hate speech he was encountering.

To be clear, Cross Checking something on Facebook does not protect the profile, Page or content from being removed. It is simply done to make sure our decision is correct.

Britain First was a cross checked Page. But the notion that this in any way protected its content is wrong. In fact, we removed Britain First from Facebook in March because their Pages repeatedly violated our Community Standards.

Minors
We do not allow people under 13 to have a Facebook account. If someone is reported to us as being under 13, the reviewer will look at the content on their profile (text and photos) to try to ascertain their age. If they believe the person is under 13, the account will be put on hold and the person will not be able to use Facebook until they provide proof of their age. Since the program aired, we have been working to update the guidance for reviewers to put a hold on any account they encounter if they have a strong indication it is underage, even if the report was for something else.

Originally published on July 16, 2018:

People all around the world use Facebook to connect with friends and family and openly discuss different ideas. But they will only share when they feel safe. That’s why we have clear rules about what’s acceptable on Facebook and established processes for applying them. We are working hard on both, but we don’t always get it right.

This week a TV report on Channel 4 in the UK raised important questions about those policies and processes, including guidance given during training sessions in Dublin. It’s clear that some of what is shown in the program does not reflect Facebook’s policies or values, and falls short of the high standards we expect.

We take these mistakes incredibly seriously and are grateful to the journalists who brought them to our attention. We have been investigating exactly what happened so we can prevent these issues from happening again. For example, we immediately required all trainers in Dublin to do a re-training session — and are preparing to do the same globally. We also reviewed the policy questions and enforcement actions that the reporter raised and fixed the mistakes we found.

We provided all this information to the Channel 4 team and included where we disagree with their analysis. Our Vice President for Global Policy Solutions, Richard Allan, also answered their questions in an on-camera interview. Our written response and a transcript of the interview can be found in full here and here.

It has been suggested that turning a blind eye to bad content is in our commercial interests. This is not true. Creating a safe environment where people from all over the world can share and connect is core to Facebook’s long-term success. If our services aren’t safe, people won’t share and over time will stop using them. Nor do advertisers want their brands associated with disturbing or problematic content.

How We Create and Enforce Our Policies

More than 1.4 billion people use Facebook every day from all around the world. They post in dozens of different languages: everything from photos and status updates to live videos. Deciding what stays up and what comes down involves hard judgment calls on complex issues — from bullying and hate speech to terrorism and war crimes. It’s why we developed our Community Standards with input from outside experts — including academics, NGOs and lawyers from around the world. We hosted three Facebook Forums in Europe in May, where we were able to hear from human rights and free speech advocates, as well as counter-terrorism and child safety experts.

These Community Standards have been publicly available for many years, and this year, for the first time, we published the more detailed internal guidelines used by our review teams to enforce them.

To help us manage and review content, we work with several companies across the globe including CPL, the company featured in the program. These teams review reports 24 hours a day, seven days a week, across all time zones and in dozens of languages. When needed, they escalate decisions to Facebook staff with deep subject matter and country expertise. For specific, highly problematic types of content such as child abuse, the final decisions are made by Facebook employees.

Reviewing reports quickly and accurately is essential to keeping people safe on Facebook. This is why we’re doubling the number of people working on our safety and security teams this year to 20,000. This includes over 7,500 content reviewers. We’re also investing heavily in new technology to help deal with problematic content on Facebook more effectively. For example, we now use technology to assist in sending reports to reviewers with the right expertise, to cut out duplicate reports, and to help detect and remove terrorist propaganda and child sexual abuse images before they’ve even been reported.
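Two of those mechanisms, cutting out duplicate reports and routing reports by expertise, can be sketched simply. Queue names and the routing key below are illustrative assumptions, not Facebook’s actual system.

    import hashlib

    queues = {}  # reviewer queue name -> pending reports
    seen = set()

    def route_report(report):
        # Duplicate reports about the same content collapse into one job.
        key = hashlib.sha256(report["content_id"].encode()).hexdigest()
        if key in seen:
            return
        seen.add(key)
        # Route by report type and language so reviewers with the right
        # expertise handle it.
        queue = f'{report["type"]}:{report["language"]}'
        queues.setdefault(queue, []).append(report)

    route_report({"content_id": "p1", "type": "hate_speech", "language": "my"})
    route_report({"content_id": "p1", "type": "hate_speech", "language": "my"})
    assert len(queues["hate_speech:my"]) == 1  # the duplicate was cut out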

We are constantly improving our Community Standards and we’ve invested significantly in being able to enforce them effectively. This is a complex task, and we have more work to do. But we are committed to getting it right so Facebook is a safe place for people and their friends.

Facebook 2018 Diversity Report: Reflecting on Our Journey

By Maxine Williams, Chief Diversity Officer

Today we’re publishing our fifth annual diversity report — and we wanted to take this opportunity to share what we believe is working and where we can do better.

Diversity is critical to our success as a company. People from all backgrounds rely on Facebook to connect with others, and we will better serve their needs with a more diverse workforce. Since 2014, we’ve made some progress increasing the number of people from traditionally underrepresented groups employed at Facebook.

The percentage of women globally at Facebook has increased from 31% in 2014 to 36% today.

  • Women in technical roles have increased from 15% to 22%.
  • Women in business and sales roles grew from 47% to 57%.
  • Women in senior leadership expanded from 23% to 30%.

The number of women at Facebook has increased 5X over the last five years. The number of women in technical roles has increased over 7X. We have also nearly doubled the share of women among the graduates we hire in software engineering, from 16% to 30%. This is despite the fact that the proportion of women among computer science undergraduates in the U.S. has remained flat at 18%.

We’ve also increased the proportion of Asian, Black and Hispanic employees across the company.

  • Black employees overall increased from 2% to 4%, and Hispanic employees from 4% to 5%.
  • The percentage of Black employees in business and sales roles grew from 2% to 8% and Hispanic employees from 6% to 8%.

But we continue to have challenges recruiting Black and Hispanic employees in technical roles and senior leadership.

  • The percentage of Black employees in technical roles remained flat, as did the percentage of Black employees in leadership roles, at 1% and 2% respectively.
  • The percentage of Hispanic employees in technical roles remained flat at 3% and dropped in leadership roles from 4% to 3%.

What’s Worked

We know that recruitment and retention are key. It’s why we’ve worked to build deep relationships with organizations that support people of color and women, including Anita Borg/Grace Hopper, SHPE and NSBE, as well as many others.

Implementing — and then expanding — the diverse slate approach has also had a positive effect. This ensures that recruiters present qualified candidates from underrepresented groups to hiring managers looking to fill open roles. As a result, it makes all of us accountable for identifying more diverse candidates during the interview process. We’ve seen steady increases in hiring rates for underrepresented people since we started testing this approach in 2015.

We’ve worked hard at retention as well by creating an inclusive environment where people from all backgrounds can thrive and succeed. This includes our many Facebook Resource Groups, which help build community and support professional development — as well as the investments we have made to tackle bias and create an inclusive culture. Programs like Managing Bias, Managing Inclusion and Be the Ally have been very well received internally.

What We Can Do Better

We’ve learned through trial and error that if we’re going to hire more people from a broader range of backgrounds, it’s not enough to simply show up at colleges and universities. We need to create practical training opportunities for these students to build on their academic experience. Programs like Crush Your Coding Interview, the Facebook University Training Program and Engineer in Residence at historically Black and Hispanic colleges and universities have all helped us recruit more women and students of color. It’s why we are expanding these programs — and adding new ones. For example, we recently signed a partnership with CodePath.org which will help them reach 2,000 more computer science students at over 20 universities. These include community colleges and universities that have traditionally attracted students of color. Over the next year, we will partner with the UNCF to design courses for their HBCU CS Summer Academy. We will also co-host the HBCU CS Faculty Institute in partnership with UNCF’s Career Pathways Initiative to offer faculty professional development.

Of course, diversity isn’t only about gender, race and ethnicity. We know that the framing and categories used to report gender are not inclusive of our non-binary employees. However, we are limited by government reporting requirements in many of the countries where we operate. We’re pleased to report the percentage of US employees who self-identify as LGBQA+ or Trans+ for the third year in a row. That number has moved from 7% to 8% over the past year. HRC has again recognized us as one of the best places to work for LGBTQ equality with a 100% rating on their Corporate Equality Index. We’re also honored to be awarded a 100% rating on the USBLN Disability Index for 2018 — and proud that veterans now make up 2% of our employees.

Once again, we are happy to share that men and women at Facebook get equal pay for equal work. We review our total compensation data every year — which includes base salary, bonus and equity — and have had 100% pay equity for women for many years, not just in the U.S. but globally.

We’ve done this report since 2014 and have been working consistently to improve diversity at Facebook so that we can make better decisions and build better products for the communities that use our services. A critical lesson we’ve learned is that recruiting, retaining and developing a diverse, inclusive workforce should be a priority from day one. The later you start taking deliberate action to increase diversity, the harder it becomes. We are encouraged by the progress we’ve made in some areas, and grateful for the advice and support we’ve had along the way. But we have so much more still to do across the board. For more information please read our annual diversity report.

An Update on Our App Investigation and Audit

By Ime Archibong, VP of Product Partnerships

Here is an update on the app investigation and audit that Mark Zuckerberg promised on March 21.

As Mark explained, Facebook will investigate all the apps that had access to large amounts of information before we changed our platform policies in 2014 — significantly reducing the data apps could access. He also made clear that where we had concerns about individual apps we would audit them — and any app that either refused or failed an audit would be banned from Facebook.

The investigation process is in full swing, and it has two phases. First, a comprehensive review to identify every app that had access to this amount of Facebook data. And second, where we have concerns, we will conduct interviews, make requests for information (RFI) — which ask a series of detailed questions about the app and the data it has access to — and perform audits that may include on-site inspections.

We have large teams of internal and external experts working hard to investigate these apps as quickly as possible. To date thousands of apps have been investigated and around 200 have been suspended — pending a thorough investigation into whether they did in fact misuse any data. Where we find evidence that these or other apps did misuse data, we will ban them and notify people via this website. It will show people if they or their friends installed an app that misused data before 2015 — just as we did for Cambridge Analytica.

There is a lot more work to be done to find all the apps that may have misused people’s Facebook data – and it will take time. We are investing heavily to make sure this investigation is as thorough and timely as possible. We will keep you updated on our progress.

Russian Ads Released by Congress

Today, House Intelligence Committee Ranking Member Adam Schiff published the 3,000 ads the Russia-based Internet Research Agency ran on Facebook and Instagram between 2015 and 2017. We gave these ads to Congress so they could better understand the extent of Russian interference in the last US presidential election.

In the run-up to the 2016 elections, we were focused on the kinds of cybersecurity attacks typically used by nation states, for example phishing and malware attacks. And we were too slow to spot this type of information operations interference. Since then, we’ve made important changes to prevent bad actors from using misinformation to undermine the democratic process.

This will never be a solved problem because we’re up against determined, creative and well-funded adversaries. But we are making steady progress. Here is a list of the 10 most important changes we have made:

1. Ads transparency. Advertising should be transparent: you should be able to see all the ads an advertiser is currently running on Facebook, Instagram and Messenger. And for issue and political ads, we’re creating an archive so you can search back seven years — including for information about ad impressions and spend, as well as demographic data such as age, gender and location. People in Canada and Ireland can already see all the ads that a Page is running on Facebook — and we’re launching this globally in June.

2. Verification and labeling. Every advertiser will now need to confirm their ID and location before being able to run any political or issue ads in the US. All political and issue ads will also clearly state who paid for them.

3. Updating targeting. We want ads on Facebook to be safe and civil. We thoroughly review the targeting criteria advertisers can use to ensure they are consistent with our principles. As a result, we removed nearly one-third of the targeting segments used by the IRA. We continue to allow some criteria that people may find controversial, because we see businesses using them in legitimate ways, for example to market historical books, documentaries or television shows.

4. Better technology. Over the past year, we’ve gotten increasingly better at finding and disabling fake accounts. We now block millions of fake accounts each day as people try to create them — and before they’ve done any harm. This is thanks to improvements in machine learning and artificial intelligence, which can proactively identify suspicious behavior at a scale that was not possible before — without needing to look at the content itself.

5. Action to tackle fake news. A key focus is working to disrupt the economics of fake news. For example, preventing the creation of fake accounts that spread it, banning sites that engage in this behavior from using our ad products, and demoting articles found to be false by fact checkers in News Feed — which typically causes an article to lose 80% of its traffic. We now work with independent fact checkers in the US, France, Germany, Ireland, the Netherlands, Italy, Mexico, Colombia, India, Indonesia and the Philippines, with plans to scale to more countries in the coming months.

6. Significant investments in security. We’re doubling the number of people working on safety and security from 10,000 last year to over 20,000 this year. We expect these investments to impact our profitability. But the safety of people using Facebook needs to come before profit.

7. Industry collaboration. Last month, we joined 34 global tech and security companies in signing a TechAccord pact to help improve security for everyone.

8. Intelligence sharing with government. In the 2017 German elections, we worked closely with the authorities there, including the Federal Office for Information Security (BSI). This gave them a dedicated reporting channel for security issues related to the federal elections.

9. Tracking 40+ elections. In recent months, we’ve started to deploy new tools and teams to proactively identify threats in the run-up to specific elections. We first tested this effort during the Alabama Senate election, and plan to continue these efforts for elections around the globe, including the US midterms. Last year we used public service announcements to help inform people about fake news in 21 separate countries, including in advance of French, Kenyan and German elections.

10. Action against the Russia-based IRA. In April, we removed 70 Facebook and 65 Instagram accounts — as well as 138 Facebook Pages — controlled by the IRA targeted at people living in Russia or Russian-speakers in Azerbaijan, Uzbekistan and Ukraine. The IRA has repeatedly used complex networks of inauthentic accounts to deceive and manipulate people in the US, Europe and Russia — and we don’t want them on Facebook anywhere in the world.

As our CEO and founder Mark Zuckerberg told Congress last month, we need to take a broader view of our responsibilities as a company. That means not just building products that help people connect — but also ensuring that they are used for good and not abused. We still have a long way to go. And we will keep you updated on our progress.

Facebook Community Boost Celebrates Small Business Week, Confirms Dates and Announces 13 New US Cities

By Doug Frisbie, Global Marketing Director, Small Business

As our CEO Mark Zuckerberg has said, we need to make it easier for people to grow businesses or find jobs. The 80 million small businesses on Facebook represent one of the largest communities of small businesses in the world. And we are building more technology and new programs based on their feedback to help them grow, trade, and hire.

Today, in celebration of Small Business Week, we are delighted to announce dates and new cities for Facebook events:

New dates for previously announced Facebook Community Boost cities:

  • Denver, CO — June 18-19
  • Hampton, VA — June 26-28
  • Phoenix, AZ — July 9-10
  • Buffalo, NY — July 9-11
  • Minneapolis, MN — July 18-19
  • Helena, MT — July 24-25
  • Columbus, OH — August 1-3
  • Menlo Park / East Palo Alto — August 27-30

New Facebook Community Boost Cities

  • San Diego, CA — August 6-7
  • Pittsburgh, PA — August 9-10
  • Topeka, KS — September 5-6
  • Springfield, MA — September 10-11
  • Jackson, MS — September 18-19
  • Atlanta, GA — September 24-26
  • Omaha, NE — Oct 2-4
  • Edison, NJ — Oct 8-9

New Boost Your Business Cities

  • Santa Fe, NM — May 3
  • Effingham, IL — May 3
  • New York City — May 22
  • Norwalk, CT — May 23
  • Greater Shreveport area — June 18

Our team is committed to investing in small businesses and their communities and we’ll continue sharing what we learn along the way. We’d also love to hear your thoughts at facebook.com/communityboost, where you can stay up to date with the latest city, news, and program announcements.

F8 2018: Open AI Frameworks, New AR/VR Advancements, and Other Highlights from Day 2

The second day of F8 focused on the long-term technology investments we are making in three areas: connectivity, AI, and AR/VR. Chief Technology Officer Mike Schroepfer kicked off the keynote, followed by Engineering Director Srinivas Narayanan, Research Scientist Isabel Kloumann, and Head of Core Tech Product Management Maria Fernandez Guajardo.

From advances in bringing connectivity to more people throughout the world to state-of-the-art research breakthroughs in AI to the development of entirely new experiences in AR/VR, Facebook continues to build new technologies that will bring people closer together and help keep them safe.

Artificial Intelligence 

We view AI as a foundational technology, and we’ve made deep investments in advancing the state of the art through scientist-directed research. Today at F8, our artificial intelligence research and engineering teams shared a recent breakthrough: the teams successfully trained an image recognition system on a data set of 3.5 billion publicly available photos, using the hashtags on those photos in place of human annotations. This new technique will allow our researchers to scale their work much more quickly, and they’ve already used it to achieve a record-high 85.4% accuracy on the widely used ImageNet benchmark. We’ve already been able to leverage this work in production to improve our ability to identify content that violates our policies.
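
To make the hashtag-as-label idea concrete, here is a minimal sketch of how such weakly supervised training can be set up. It is an illustration, not Facebook’s production system: the model choice, vocabulary size and data shapes below are placeholder assumptions.

```python
# Sketch: pretrain an image classifier using hashtags as (noisy) labels.
# Placeholder assumptions: a ResNet-50 trunk, an illustrative hashtag
# vocabulary size, and batches that arrive with multi-hot hashtag targets.
import torch
import torch.nn as nn
from torchvision import models

NUM_HASHTAGS = 10_000  # illustrative vocabulary size, not the real figure

model = models.resnet50(num_classes=NUM_HASHTAGS)
# A photo can carry several hashtags, so treat training as multi-label
# classification: an independent sigmoid per hashtag.
criterion = nn.BCEWithLogitsLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9)

def train_step(images, hashtag_targets):
    """images: (N, 3, H, W) tensor; hashtag_targets: (N, NUM_HASHTAGS) multi-hot floats."""
    optimizer.zero_grad()
    logits = model(images)
    loss = criterion(logits, hashtag_targets)
    loss.backward()
    optimizer.step()
    return loss.item()
```

In a setup like this, the hashtag-prediction head would typically be discarded after pretraining and the trunk fine-tuned on the target task, which is how a benchmark such as ImageNet would then be evaluated.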

Mike Schroepfer discusses our image recognition work at F8 2018

This image recognition work is powered by our AI research and production tools: PyTorch, Caffe2, and ONNX. Today, we announced the next version of our open source AI framework, PyTorch 1.0, which combines the capabilities of all these tools to provide everyone in the AI research community with a fast, seamless path for building a broad range of AI projects. The technology in PyTorch 1.0 is already being used at scale, including performing nearly 6 billion text translations per day for the 48 most commonly used languages on Facebook. In VR, these tools have helped in deploying new research into production to make avatars move more realistically.

The PyTorch 1.0 toolkit will be available in beta within the next few months, making Facebook’s state-of-the-art AI research tools available to everyone. With it, developers can take advantage of computer vision advances like DensePose, which can put a full polygonal mesh overlay on people as they move through a scene — something that will help make AR camera applications more compelling.

For a deeper dive on all of today’s AI updates and advancements, including our open source work on ELF OpenGo, check out the posts on our Engineering Blog or visit facebook.ai/developers, where you can get tools and code to build your own applications.

AR/VR 

Facebook’s advancements in AR and VR draw from an array of research areas to help us create better shared experiences, regardless of physical distance. From capturing realistic-looking surroundings to producing next-generation avatars, we’re closer to making AR/VR experiences feel like reality.

Our research scientists have created a prototype system that can generate 3D reconstructions of physical spaces with surprisingly convincing results. The video below shows a side-by-side comparison between normal footage and a 3D reconstruction. It’s hard to tell the difference. (Hint: Look for the camera operator’s foot, which appears only in the regular video.)

Realistic surroundings are important for creating more immersive AR/VR, but so are realistic avatars. Our teams have been working on state-of-the-art research to help computers generate photorealistic avatars, seen below.

Connectivity  

These advances in AI and AR/VR are relevant only if you have access to a strong internet connection — and there are currently 3.8 billion people around the world who don’t have internet access. To increase connectivity around the world, we’re focused on developing next-generation technologies that can help bring the cost of connectivity down to reach the unconnected and increase capacity and performance for everyone else. In Uganda, we partnered with local operators to bring new fiber to the region that, when completed, will provide backhaul connectivity covering more than 3 million people and enable future cross-border connectivity to neighboring countries. Meanwhile, Facebook and City of San Jose employees have begun testing an advanced Wi-Fi network supported by Terragraph. Trials of Terragraph are also planned for Hungary and Malaysia. We are also working with hundreds of partners in the Telecom Infra Project to build and launch a variety of innovative, efficient network infrastructure solutions. And, as with our work in AI and other areas, we are sharing what we learn about connectivity so that others can benefit from it.

Watch the full keynote here.

To read more about yesterday’s announcements, read our Day 1 Roundup. For more details on today’s news, see our Developer Blog, Engineering Blog, Oculus Blog, Messenger Blog, Instagram Press Center and Newsroom. You can also watch all F8 keynotes on the Facebook for Developers Page.

F8 2018: Using Technology to Remove the Bad Stuff Before It’s Even Reported

By Guy Rosen, VP of Product Management

There are two ways to get bad content (like terrorist videos, hate speech, porn or violence) off Facebook: take it down when someone flags it, or proactively find it using technology. Both are important. But advances in technology, including in artificial intelligence, machine learning and computer vision, mean that we can now:

  • Remove bad content faster because we don’t always have to wait for it to be reported. In the case of suicide this can mean the difference between life and death: as soon as our technology identifies that someone has expressed thoughts of suicide, we can reach out to offer help or work with first responders, which we’ve now done in over a thousand cases.
  • Get to more content, again because we don’t have to wait for someone else to find it. As we announced two weeks ago, in the first quarter of 2018, for example, we proactively removed almost two million pieces of ISIS and al-Qaeda content — 99% of which was taken down before anyone reported it to Facebook.
  • Increase the capacity of our review team to work on cases where human expertise is needed to understand the context or nuance of a particular situation. For instance, is someone talking about their own drug addiction, or encouraging others to take drugs?

It’s taken time to develop this software – and we’re constantly pushing to improve it. We do this by analyzing specific examples of bad content that have been reported and removed to identify patterns of behavior. These patterns can then be used to teach our software to proactively find other, similar problems.

  • Nudity and graphic violence: These are two very different types of content but we’re using improvements in computer vision to proactively remove both.
  • Hate speech: Understanding the context of speech often requires human eyes – is something hateful, or is it being shared to condemn hate speech or raise awareness about it? We’ve started using technology to proactively detect something that might violate our policies, starting with certain languages such as English and Portuguese. Our teams then review the content so what’s OK stays up, for example someone describing hate they encountered to raise awareness of the problem.
  • Fake accounts: We block millions of fake accounts every day when they are created and before they can do any harm. This is incredibly important in fighting spam, fake news, misinformation and bad ads. Recently, we started using artificial intelligence to detect accounts linked to financial scams.
  • Spam: The vast majority of our work fighting spam is done automatically using recognizable patterns of problematic behavior. For example, if an account is posting over and over in quick succession, that’s a strong sign something is wrong; a toy version of this kind of check is sketched after this list.
  • Terrorist propaganda: The vast majority of this content is removed automatically, without the need for someone to report it first.
  • Suicide prevention: As explained above, we proactively identify posts which might show that people are at risk so that they can get help.

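As referenced in the spam item above, here is a toy version of a “posting over and over in quick succession” check. The window length and threshold are invented for illustration; a real system combines many behavioral signals rather than a single counter.

```python
# Toy sliding-window check for rapid repeat posting.
# WINDOW_SECONDS and MAX_POSTS are illustrative values, not real thresholds.
from collections import deque

WINDOW_SECONDS = 60
MAX_POSTS = 10

class PostRateMonitor:
    """Tracks one account's recent post timestamps."""

    def __init__(self):
        self.timestamps = deque()

    def record_post(self, now: float) -> bool:
        """Record a post at time `now`; return True if the rate looks suspicious."""
        self.timestamps.append(now)
        # Evict timestamps that have fallen out of the sliding window.
        while self.timestamps and now - self.timestamps[0] > WINDOW_SECONDS:
            self.timestamps.popleft()
        return len(self.timestamps) > MAX_POSTS

monitor = PostRateMonitor()
flags = [monitor.record_post(now=1000.0 + i) for i in range(12)]
print(flags[-1])  # True: 12 posts in 11 seconds exceeds the toy threshold
```
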
When I talk about technology like artificial intelligence, computer vision or machine learning people often ask why we’re not making progress more quickly. And it’s a good question. Artificial intelligence, for example, is very promising but we are still years away from it being effective for all kinds of bad content because context is so important. That’s why we have people still reviewing reports.

And more generally, the technology needs large amounts of training data to recognize meaningful patterns of behavior, which we often lack in less widely used languages or for cases that are not often reported. It’s why we can typically do more in English as it is the biggest data set we have on Facebook.

But we are investing in technology to increase our accuracy across new languages. For example, Facebook AI Research (FAIR) is working on an area called multi-lingual embeddings as a potential way to address the language challenge. It’s also why we may ask people for feedback on posts that contain certain types of content, to encourage them to flag it for review, and why reports from people who use Facebook are so important – so please keep them coming. By working together we can help make Facebook safer for everyone.
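
As a rough sketch of what multi-lingual embeddings aim at: mapping words or sentences from different languages into one shared vector space, so that a classifier trained on English examples can also score text in other languages. One published family of approaches learns a linear map between two embedding spaces from a small bilingual dictionary; everything below (the vectors, the word pairs, the dimensionality) is fabricated for illustration.

```python
# Sketch: align two monolingual embedding spaces with a linear map learned
# from a seed dictionary, so vectors become comparable across languages.
# All vectors and word pairs here are fabricated for illustration.
import numpy as np

# Toy monolingual embeddings (dimension 4).
en = {"hello": np.array([1.0, 0.0, 0.0, 0.0]),
      "world": np.array([0.0, 1.0, 0.0, 0.0])}
pt = {"ola":   np.array([0.0, 0.0, 1.0, 0.0]),
      "mundo": np.array([0.0, 0.0, 0.0, 1.0])}

# Seed dictionary of known translation pairs (English, Portuguese).
pairs = [("hello", "ola"), ("world", "mundo")]
X = np.stack([pt[p] for _, p in pairs])  # Portuguese vectors
Y = np.stack([en[e] for e, _ in pairs])  # English vectors

# Least-squares linear map W such that X @ W ~ Y.
W, *_ = np.linalg.lstsq(X, Y, rcond=None)

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9)

mapped = pt["ola"] @ W  # "ola" expressed in the English space
print(max(en, key=lambda w: cosine(mapped, en[w])))  # -> hello
```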

A New Investment in Community Leaders

By Jennifer Dulski, Head of Groups and Community, and Ime Archibong, Vice President, Product Partnerships

Today at the Facebook Communities Summit Europe, we announced the Facebook Community Leadership Program, a global initiative that invests in people building communities. Facebook will commit tens of millions of dollars to the program, including up to $10 million in grants that will go directly to people creating and leading communities.

In addition, we introduced new tools for group admins and the expansion of our London-based engineering team that builds technology to help keep people safe on Facebook.

More than 300 community leaders from across Europe came together today in London including Blind Veterans UK, an advocacy organization that provides practical and emotional support to blind veterans and their families; Donna Mamma, a support group for mothers in France to share advice and information; Girl Skate UK, which celebrates and brings together the female skateboarding community; High Society PL, a group of sneaker enthusiasts who bond over their shared passion; and Berlin Bruisers, Germany’s first gay and inclusive rugby club.

Facebook Community Leadership Program

Community leaders often tell us that with additional support they could have more impact. The Facebook Community Leadership Program is designed to empower leaders from around the world who are building communities through the Facebook family of apps and services. It includes:

  • Residency and Fellowship opportunities offer training, support and funding for community leaders from around the world.
    • Up to five leaders will be selected to be community leaders in residence and awarded up to $1,000,000* each to fund their proposals.
    • Up to 100 leaders will be selected for our fellowship program and will receive up to $50,000* each to be used for a specific community initiative.
  • Community Leadership Circles bring local community leaders together to meet up in person to connect, learn and collaborate. We piloted three circles in the US in 2017 and will be expanding globally this year.
  • Groups for Facebook Power Admins, which we currently run with more than 10,000 group admins in the US and UK, are expanding to more members to help them share advice with one another and connect with our team to test new features and share feedback.

Applications are now open for the residency and fellowship. To learn more and apply, visit communities.fb.com.

New Tools for Group Admins and Members

Group admins want to keep their communities safe, organized and engaged. Today we added four new features to support them.

  • Admin tools: Admins can now find member requests, Group Insights and more together in one place, making it easier to manage groups and freeing up more time for admins to connect with members.
  • Group announcements: Group admins want to be able to more easily share updates, so we’re introducing group announcements to let admins post up to 10 announcements that appear at the top of their group.
  • Group rules: Keeping communities safe is important. Now admins can create a dedicated rules section to help them effectively communicate the rules of the group to their members.
  • Personalization: Each community has its own identity — now admins can add a personalized color that is displayed throughout their group.

Expanding London Engineering Team for Community Safety

A team of engineers across the globe builds technologies that help keep our community safe and secure. London is home to Facebook’s largest engineering hub outside of the US, and by the end of 2018, we will double the number of people working in London on these issues.

Our engineering work on community safety includes the following:

  • Detecting and stopping fake accounts: Working to make sure Facebook is a community of people who can connect authentically with real people.
  • Protecting people from harm: Reducing things like harassment and scams that can happen in the real world and on Facebook, by building better tools to spot these issues and remove them.
  • Improving ways to report content: Making it easier for people to give us feedback about things that shouldn’t be on our platform, which works in conjunction with our automated detection.

People find meaning and support in community, online and in person. The programs and tools we announced today are designed to help the admins who lead these communities to grow and strengthen bonds among members. We are inspired by these leaders and look forward to continuing our efforts to support them.

*The final payment amount in USD may vary due to potential exchange rate fluctuation at the time of payment. At the time of announcement, $1,000,000 USD equates to approximately €810,000 or £718,000; $50,000 USD equates to approximately €40,500 or £35,900.


Safer Internet Day: Teaching Children to Safely Engage Online and Supporting Parent Conversations

By Antigone Davis, Global Head of Safety

Every year on Safer Internet Day we recognize the importance of internet safety and the responsible use of online technology. This year, Safer Internet Day features a call to action focused on creating a better internet for everyone, including younger generations. As a company that reaches people around the world, we’re taking this call to action to heart.

Creating a better internet for kids starts with empowering parents. The fact that parents see themselves as the best judges of how their kids should use technology helped guide our development of the Messenger Kids app. Parents control their kids’ accounts and contacts through the Messenger Kids Controls panel, creating a safer and more controlled environment for their kids to talk to trusted contacts.

As a mom and Facebook’s Global Head of Safety, I know how overwhelming it can be to raise a child in an increasingly digital world. So this year to mark Safer Internet Day, we want to help parents start a conversation with their children about technology and the choices they make when they go online.

Tips to Keep Kids Safe

We often hear that parents aren’t sure how to approach these topics with their kids. To make it easier, we’ve compiled some tips to jump-start the conversation.

  • Let your child know the same rules apply online as they do offline. Just as you’d tell your child to look both ways before crossing the street or to wear a helmet while riding their bike, teach them to think before they share online and to use the security and safety tools available on apps and devices.
  • Be a good role model. The saying that children will “do as you do, not as you say” is as true online as it is offline. If you set time restrictions on when your child can use social media or be online, follow the same rules yourself.
  • Engage early and often. Data suggests that parents should be a part of what their children are doing online as soon as they start to participate. Consider adding them as a friend when they create a social media account or an account on a messaging app, and have conversations with them often about what they’re doing and who they’re talking to when they go online.
  • Set the rules and know the tools. When your child gets their first tablet or phone and starts using apps, it’s a good time to set ground rules. It’s also a great time to take them through the basics of the tools available on the app. For instance, teach them how to report a piece of content and how to spot people who don’t have good intentions.
  • Ask your children to teach you. Children are often even more in touch with the newest apps and sites than adults, and they can be an excellent resource. The conversation can also serve as an opportunity to talk about issues of safety, privacy and security.

Listening to Parents

We recently conducted a survey of parents to get a fuller and clearer understanding of their attitudes toward technology. The survey found that:

  • 64% of parents trust themselves the most to guide their child’s technology use.*
  • 77% of parents say they are the most appropriate to determine how much time their child spends using online technologies.
  • 77% of parents say they are the most appropriate to decide the right age for their child to use online digital technologies.

When creating products for kids, we know we have to get it right. That means going beyond the basics of complying with child privacy laws. It’s why we’ve been talking to thousands of parents and top experts in the fields of child development, online safety and children’s media. It’s also why we’re investing in further research about kids and technology.

We’ve committed resources to partner with independent academics on research studies about kids, tweens and teens and technology. Our goal is to better understand the connection between young people’s well-being and how they use digital technology. We will also convene conversations with stakeholders over the course of this year, beginning with our Global Safety Network Summit in Washington, D.C., this March.

Introducing Parent Conversations: A New Section of the Parents Portal

We want to provide parents with information to make the decisions that are best for their families.

Today we’re launching a new section of our Parents Portal where parents can find the latest information from child development experts, academics, thought leaders and people at Facebook about topics related to kids and technology. We’ll post videos and Q&As, as well as interactive polls so parents can express their voice in these important conversations. To visit Parent Conversations and find tips on keeping your kids safe online in today’s digital age, visit facebook.com/Safety/Parents/Conversations.

*In February 2018, Facebook conducted an unbranded survey with an online panel provider. The participants were a nationally representative sample of 275 US parents of 6th – 12th graders and 604 children aged 8-17.

Giving You More Control of Your Privacy on Facebook

By Erin Egan, Chief Privacy Officer

As part of Data Privacy Day, we’re introducing a new education campaign to help you understand how data is used on Facebook and how you can manage your own data. We’re also announcing plans to make your core privacy settings easier to find, and sharing our privacy principles for the first time. These principles guide our work at Facebook.

Helping You Take Control of Your Data on Facebook
You have many ways to control your data on Facebook. This includes tools to make sure you share only what you want with the people you want to see it. But privacy controls are only powerful if you know how to find and use them. Starting today we’re introducing educational videos in News Feed that help you get information on important privacy topics like how to control what information Facebook uses to show you ads, how to review and delete old posts, and even what it means to delete your account.

We’re also inviting people to take our Privacy Checkup and sharing privacy tips in education campaigns off Facebook, including ads on other websites. We’ll refresh our education campaigns throughout the year to give you tips on different topics.

Making Privacy Settings Easier to Find
We know how important it is for you to have clear, simple tools for managing your privacy. This year, we’ll introduce a new privacy center that features core privacy settings in a single place. We’re designing this based on feedback from people, policymakers and privacy experts around the world.

Facebook’s Privacy Principles
Our efforts to build data protection into our products and give you more information and control reflect core principles we’ve had on privacy. Today we’re sharing these principles for the first time here, and we’ve included them below.

We’re also developing resources that help other organizations build privacy into their services. For example, throughout 2018 we’re hosting workshops on data protection for small and medium businesses, beginning in Europe with a focus on the new General Data Protection Regulation. We hosted our first workshop in Brussels last week and published a guide for frequently asked questions. Around the world we’ll continue to host Design Jams that bring designers, developers, privacy experts and regulators together to create new ways of educating people on privacy and giving them control of their information.

We’ll keep improving our privacy tools and look forward to hearing what you think.

 

Facebook’s Privacy Principles

Facebook was built to bring people closer together. We help you connect with friends and family, discover local events and find groups to join. We recognize that people use Facebook to connect, but not everyone wants to share everything with everyone – including with us. It’s important that you have choices when it comes to how your data is used. These are the principles that guide how we approach privacy at Facebook.

We give you control of your privacy
You should be able to make the privacy choices that are right for you. We want to make sure you know where your privacy controls are and how to adjust them. For example, our audience selector tool lets you decide who you share with for every post. We develop controls based on feedback from around the world.

We help people understand how their data is used
While our Data Policy describes our practices in detail, we go beyond this to give you even more information. For example, we include education and tools in people’s day-to-day use of Facebook – like ad controls in the top right corner of every ad.

We design privacy into our products from the outset
We design privacy into Facebook products with guidance from experts in areas like data protection and privacy law, security, interface design, engineering, product management, and public policy. Our privacy team works to build these diverse perspectives into every stage of product development.

We work hard to keep your information secure
We work around the clock to help protect people’s accounts, and we build security into every Facebook product. Our security systems run millions of times per second to help catch threats automatically and remove them before they ever reach you. You can also use our security tools like two-factor authentication to help keep your account even more secure.

You own and can delete your information
You own the information you share on Facebook. This means you decide what you share and who you share it with on Facebook, and you can change your mind. That’s why we give you tools for deleting anything you’ve posted. We remove it from your timeline and from our servers. You can also delete your account whenever you want.

Improvement is constant
We’re constantly working to develop new controls and design them in ways that explain things to people clearly. We invest in research and work with experts beyond Facebook including designers, developers, privacy professionals and regulators.

We are accountable
In addition to comprehensive privacy reviews, we put products through rigorous data security testing. We also meet with regulators, legislators and privacy experts around the world to get input on our data practices and policies.

Managing Your Identity on Facebook with Face Recognition Technology

By Joaquin Quiñonero Candela, Director, Applied Machine Learning

Today we’re announcing new, optional tools to help people better manage their identity on Facebook using face recognition. Powered by the same technology we’ve used to suggest friends you may want to tag in photos or videos, these new features help you find photos that you’re not tagged in and help you detect when others might be attempting to use your image as their profile picture. We’re also introducing a way for people who are visually impaired to know more about who is in the photos they encounter on Facebook.

People gave us feedback that they would find it easier to manage face recognition through a simple setting, so we’re pairing these tools with a single “on/off” control. If your tag suggestions setting is currently set to “none,” then your default face recognition setting will be set to “off” and will remain that way until you decide to change it.

Know When You Appear in Photos on Facebook

Now, if you’re in a photo and are part of the audience for that post, we’ll notify you, even if you haven’t been tagged. You’re in control of your image on Facebook and can make choices such as whether to tag yourself, leave yourself untagged, or reach out to the person who posted the photo if you have concerns about it. We always respect the privacy setting people select when posting a photo on Facebook (whether that’s friends, public or a custom audience), so you won’t receive a notification if you’re not in the audience.

Profile Photo Safety

We want people to feel confident when they post pictures of themselves on Facebook so we’ll soon begin using face recognition technology to let people know when someone else uploads a photo of them as their profile picture. We’re doing this to prevent people from impersonating others on Facebook.

New Tools for People with Visual Impairments

We’re always working to make it easier for all people, regardless of ability, to access Facebook, make connections and have more opportunities. Two years ago, we launched an automatic alt-text tool, which describes photos to people with vision loss. Now, with face recognition, people who use screen readers will know who appears in photos in their News Feed even if people aren’t tagged.

How it Works and the Choices You Have

Since 2010, face recognition technology has helped bring people closer together on Facebook. Our technology analyzes the pixels in photos you’re already tagged in and generates a string of numbers we call a template. When photos and videos are uploaded to our systems, we compare those images to your template.
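
In outline, that description maps onto something like the sketch below: average the embeddings of a person’s tagged faces into a template vector, then compare newly uploaded faces against it. This is a simplified sketch under stated assumptions; the real embedding model, template format and matching threshold are not described in this post, so embed_face and the 0.9 threshold are stand-ins.

```python
# Simplified sketch of template matching: a "template" is a vector of
# numbers summarizing a person's tagged faces; new faces are compared to
# it by cosine similarity. embed_face() is a toy stand-in for a learned
# face-embedding model, and the threshold is invented.
import numpy as np

def embed_face(image: np.ndarray) -> np.ndarray:
    """Toy embedder: real systems use a deep network, not raw pixels."""
    v = image.astype(float).ravel()
    return v / (np.linalg.norm(v) + 1e-9)

def build_template(tagged_faces: list) -> np.ndarray:
    """Average the embeddings of faces from photos the person is tagged in."""
    t = np.stack([embed_face(f) for f in tagged_faces]).mean(axis=0)
    return t / (np.linalg.norm(t) + 1e-9)

def matches(template: np.ndarray, new_face: np.ndarray,
            threshold: float = 0.9) -> bool:
    """True if the uploaded face is similar enough to the stored template."""
    return float(template @ embed_face(new_face)) >= threshold

rng = np.random.default_rng(0)
face = rng.random((8, 8))
template = build_template([face, face + 0.01 * rng.random((8, 8))])
print(matches(template, face))                # True
print(matches(template, rng.random((8, 8))))  # likely False
```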

You control whether Facebook can recognize you in photos and videos. Soon, you will begin to see a simple on/off switch instead of settings for individual features that use face recognition technology. We designed this as an on/off switch because people gave us feedback that they prefer a simpler control than having to decide for every single feature using face recognition technology. To learn more about all of these features, visit the Help Center or your account settings.

We are introducing these new features in most places, except in Canada and the EU where we don’t currently offer face recognition technology.

Reinforcing Our Commitment to Transparency

By Chris Sonderby, Deputy General Counsel

Today we are releasing our Transparency Report, previously called the Government Requests Report, for the first half of 2017. For the first time, we are expanding the report beyond government requests to provide data regarding reports from rights holders related to intellectual property (IP) — covering copyright, trademark, and counterfeit. The report also includes the same categories of information we’ve disclosed in the past, with updates on government requests for account data, content restrictions, and internet disruptions.

We believe that sharing information about IP reports we receive from rights holders is an important step toward being more open and clear about how we protect the people and businesses that use our services. Our Transparency Report describes these policies and procedures in more detail, along with the steps we’ve taken to safeguard the people who use Facebook and keep them informed about IP. It also includes data covering the volume and nature of copyright, trademark, and counterfeit reports we’ve received and the amount of content affected by those reports. For example, in the first half of 2017, we received 224,464 copyright reports about content on Facebook, 41,854 trademark reports, and 14,279 counterfeit reports.

In addition to our new section on intellectual property, we are also providing our usual twice-a-year update on government requests for account data, content restrictions based on local law, and information about internet disruptions in the first half of this year.

Requests for account data increased by about 23% globally compared to the second half of 2016, from 64,279 to 78,890. Fifty-seven percent of the data requests we received from law enforcement in the U.S. contained a non-disclosure order that prohibited us from notifying the user, up from 50% in our last report. Additionally, as a result of transparency reforms introduced in 2016 by the USA Freedom Act, the U.S. government notified us that it was lifting the non-disclosure order on five National Security Letters (NSLs) we previously received between 2012 and 2015. Copies of the NSLs, as well as the government’s authorization letters, are available for download below.

We continue to carefully scrutinize each request we receive for account data — whether from an authority in the U.S., Europe, or elsewhere — to make sure it is legally sufficient. If a request appears to be deficient or overly broad, we push back, and will fight in court, if necessary. We’ll also keep working with partners in industry and civil society to encourage governments around the world to reform surveillance in a way that protects their citizens’ safety and security while respecting their rights and freedoms.

Overall, the number of content restrictions for violating local law increased by 304% globally, compared to the second half of 2016, from 6,944 to 28,036. This increase was primarily driven by a request from Mexican law enforcement to remove instances of a video depicting a school shooting in Monterrey in January. We restricted access in Mexico to 20,506 instances of the video in the first half of 2017.

Meanwhile, there were 52 disruptions of Facebook services in nine countries in the first half of 2017, compared to 43 disruptions in 20 countries in the second half of 2016. We continue to be deeply concerned by internet disruptions, which can create barriers for businesses and prevent people from sharing and communicating with their family and friends.

Publishing this report reinforces our important commitment to transparency as we build community and bring the world closer together.

Please see the full report for more information.

Moving to a Local Selling Model

By Dave Wehner, Chief Financial Officer

Today we are announcing that Facebook has decided to move to a local selling structure in countries where we have an office to support sales to local advertisers. In simple terms, this means that advertising revenue supported by our local teams will no longer be recorded by our international headquarters in Dublin, but will instead be recorded by our local company in that country.

We believe that moving to a local selling structure will provide more transparency to governments and policy makers around the world who have called for greater visibility over the revenue associated with locally supported sales in their countries.

It is our expectation that we will make this change in countries where we have a local office supporting advertisers in that country. That said, each country is unique, and we want to make sure we get this change right. This is a large undertaking that will require significant resources to implement around the world. We will roll out new systems and invoicing as quickly as possible to ensure a seamless transition to our new structure. We plan to implement this change throughout 2018, with the goal of completing all offices by the first half of 2019.

Our headquarters in Menlo Park, California, will continue to be our US headquarters and our offices in Dublin will continue to be the site of our international headquarters.

Sharing Facebook’s Policy on Sexual Harassment

By Sheryl Sandberg, Chief Operating Officer, and Lori Goler, VP of People 

Harassment, discrimination, and retaliation in the workplace are unacceptable but have been tolerated for far too long.

At Facebook, we treat any allegations of such behavior with great seriousness, and we have invested significant time and resources into developing our policies and processes. Many people have asked if we’d be willing to share our policies and training guidelines, so today we are making them available publicly—not because we think we have all the answers, but because we believe that the more companies are open about their policies, the more we can all learn from one another. These are complicated issues, and while we don’t believe any company’s enforcement or policies are perfect, we think that sharing best practices can help us all improve, especially smaller companies that may not have the resources to develop their own policies. Every company should aspire to doing the hard and continual work necessary to build a safe and respectful workplace, and we should all join together to make this happen.

You can find Facebook’s internal policies on sexual harassment and bullying on our Facebook People Practices website, along with details of our investigation process and tips and resources we have found helpful in preparing our Respectful Workplace internal trainings. You’ll see that our philosophy on harassment, discrimination, and bullying is to go above and beyond what is required by law. Our policies prohibit intimidating, offensive, and sexual conduct even when that conduct might not meet the legal standard of harassment. Even if it’s legally acceptable, it’s not the kind of behavior we want in our workplace.

In developing our policies we were guided by six basic principles:

  • First, develop training that sets the standard for respectful behavior at work, so people understand what’s expected of them right from the start. In addition to prescribing mandatory harassment training, we wrote our own unconscious bias training program at Facebook, which is also available publicly on our People Practices website.
  • Second, treat all claims—and the people who voice them—with seriousness, urgency, and respect. At Facebook, we make sure to have HR business partners available to support everyone on the team, not just senior leaders.
  • Third, create an investigation process that protects employees from stigma or retaliation. Facebook has an investigations team made up of experienced HR professionals and lawyers trained to handle sensitive cases of sexual harassment and assault.
  • Fourth, follow a process that is consistently applied in every case and is viewed by employees as providing fair procedures for both victims and those accused.
  • Fifth, take swift and decisive action when it is determined that wrongdoing has occurred. We have a zero tolerance policy, and that means that when we are able to determine that harassment has occurred, those responsible are fired. Unfortunately, in some cases investigations are inconclusive and come down to one person’s word against another’s. When we don’t feel we can make a termination decision, we take other actions designed to help everyone feel safe, including changing people’s roles and reporting lines.
  • Sixth, make it clear that all employees are responsible for keeping the workplace safe—and anyone who is silent or looks the other way is complicit.

There’s no question that it is complicated and challenging to get this right. We are by no means perfect, and there will always be bad actors. Unlike law enforcement agencies, companies don’t have access to forensic evidence and instead have to rely on reported conversations, written evidence, and the best judgment of investigators and legal experts. What we can do is be as transparent as possible, share best practices, and learn from one another—recognizing that policies will evolve as we gain experience. We don’t have everything worked out at Facebook on these issues, but we will never stop striving to make sure we have a safe and respectful working environment for all our people.

Update on the Global Internet Forum to Counter Terrorism

At last year’s EU Internet Forum, Facebook, Microsoft, Twitter and YouTube declared our joint determination to curb the spread of terrorist content online. Over the past year, we have formalized this partnership with the launch of the Global Internet Forum to Counter Terrorism (GIFCT). We hosted our first meeting in August where representatives from the tech industry, government and non-governmental organizations came together to focus on three key areas: technological approaches, knowledge sharing, and research. Since then, we have participated in a Heads of State meeting at the UN General Assembly in September and the G7 Interior Ministers meeting in October, and we look forward to hosting a GIFCT event and attending the EU Internet Forum in Brussels on the 6th of December.

The GIFCT is committed to working on technological solutions to help thwart terrorists’ use of our services, and has built on the groundwork laid by the EU Internet Forum, particularly through a shared industry hash database, where companies can create “digital fingerprints” for terrorist content and share them with participating companies.

The database, which we announced our commitment to building last December and which became operational last spring, now contains more than 40,000 hashes. It allows member companies to use those hashes to identify and remove matching content — videos and images — that violates our respective policies and, in some cases, to block terrorist content before it is even posted.
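
As a minimal sketch of that mechanism (simplified to an exact cryptographic hash; matching re-encoded photos and videos in practice generally requires perceptual hashes that tolerate small changes):

```python
# Sketch of a shared hash database: members contribute fingerprints of
# content they have removed, and check uploads against the shared set.
# SHA-256 is used for brevity; real photo/video matching generally relies
# on perceptual hashes that survive re-encoding and resizing.
import hashlib

shared_hashes = set()

def fingerprint(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def contribute(removed_content: bytes) -> None:
    """A member shares only the fingerprint, never the content itself."""
    shared_hashes.add(fingerprint(removed_content))

def check_upload(data: bytes) -> bool:
    """True if the upload matches known violating content."""
    return fingerprint(data) in shared_hashes

contribute(b"<bytes of a removed video>")
print(check_upload(b"<bytes of a removed video>"))  # True -> block or review
print(check_upload(b"<unrelated upload>"))          # False
```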

We are pleased that Ask.fm, Cloudinary, Instagram, Justpaste.it, LinkedIn, Oath, and Snap have also recently joined this hash-sharing consortium, and we will continue our work to add additional companies throughout 2018.

In order to disrupt the distribution of terrorist content across the internet, companies have invested in collaborating and sharing expertise with one another. GIFCT’s knowledge-sharing work has grown quickly in large measure because companies recognize that in countering terrorism online we face many of the same challenges.

Although our companies have been sharing best practices around counterterrorism for several years, in recent months GIFCT has provided a more formal structure to accelerate and strengthen this work. In collaboration with the Tech Against Terrorism initiative — which recently launched a Knowledge Sharing Platform with the support of GIFCT and the UN Counter-Terrorism Committee Executive Directorate — we have held workshops for smaller tech companies in order to share best practices on how to disrupt the spread of violent extremist content online.

Our initial goal for 2017 was to work with 50 smaller tech companies to share best practices on how to disrupt the spread of violent extremist material. We have exceeded that goal, engaging with 68 companies over the past several months through workshops in San Francisco, New York, and Jakarta, plus another workshop next week in Brussels on the sidelines of the EU Internet Forum.

We recognize that our work is far from done, but we are confident that we are heading in the right direction. We will continue to provide updates as we forge new partnerships and develop new technology in the face of this global challenge.

Facebook Social Good Forum: Announcing New Tools and Initiatives for Communities to Help Each Other

By Naomi Gleit, VP Social Good

Today at the second annual Social Good Forum, we announced new tools and initiatives to help people keep each other safe and supported on Facebook.

  • Mentorship and Support, a new product where mentees and mentors come together to connect and interact directly with each other and progress through a guided program developed by nonprofit organizations
  • Eliminating nonprofit fees, 100% of donations made through Facebook payments to nonprofits will now go directly to those organizations
  • Facebook Donations Fund, a $50 million annual fund for 2018 that will help communities recover from disasters through direct contributions and matching dollars, increasing the impact of our community’s support during crises like a major natural disaster. The fund will also help more people support causes that they care about, as well as help nonprofits increase the amount raised by their supporters for campaigns like Giving Tuesday
  • Charitable giving tools expansion, people can now create fundraisers in places like Europe, Australia, Canada and New Zealand
  • Fundraisers API, the ability for people to sync their off-Facebook fundraising to their Facebook fundraisers
  • Community Help API, a new tool that will give disaster response organizations access to Community Help data, offering important information about the needs of people affected by crises so that they can respond
  • Blood donations feature, more than 4 million donors in India have signed up, expanding to connect blood banks and hospitals to donors through blood donation events, and introducing the feature in Bangladesh in early 2018

Introducing Mentorship and Support
Mentorship and Support is a new product that connects people who may need support and advice to achieve their goals with people who have the expertise and experience to help. The mentee and mentor are matched by a nonprofit partner organization and work through a step-by-step program on Facebook developed by the nonprofit organization and tailored to the needs of the mentee.

We are starting as a pilot with iMentor (for education) and the International Rescue Committee (for crisis recovery). Our goal is to expand these tools to help connect people around a variety of causes like addiction recovery, career advancement, and other areas where having someone you can count on for support can make all the difference.

We take privacy and security very seriously, and this product is being built with both in mind. It is only available to people 18 years and older. Mentors are vetted by the partner organizations before they are matched with mentees, and people can also report issues to Facebook if they encounter problems.

Expanding our charitable giving tools globally
Nonprofit fundraising tools (including donate buttons and nonprofit fundraisers) allow people to raise money for nonprofit organizations, and are now available in the United Kingdom, Ireland, Germany, France, Spain, Italy, Poland, the Netherlands, Belgium, Sweden, Portugal, Denmark, Norway, Austria, Finland and Luxembourg.

Personal fundraisers allow people to raise money for themselves, a friend or something or someone not on Facebook, and are now available in the United Kingdom, Ireland, Canada, Australia, Germany, Spain, Italy, Netherlands, Belgium, Portugal, Austria, Finland, Luxembourg, Sweden, Denmark and New Zealand.


New Fundraisers API
People will be able to sync their off-Facebook fundraising efforts to Facebook fundraisers, making it easier to tell friends and family about the causes they support on and off Facebook. When people connect their off-Facebook fundraising campaign with Facebook, it creates a Facebook fundraiser that syncs with their campaign page.

Connecting to Facebook can help participants meet their goal faster by allowing them to easily reach all of their Facebook friends. Friends can share the fundraiser with others, spreading the word and reaching new donors. And donors can give in just a few taps without ever leaving Facebook. We are starting with Susan G. Komen, JDRF, National Multiple Sclerosis Society and Movember, and will be rolling this out to 500 additional nonprofits by the end of spring 2018.
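
As a rough illustration of the sync from a partner's side, the Python sketch below posts a campaign to a Graph API style endpoint. Every endpoint and field name here is a hypothetical placeholder: the production Fundraisers API is limited to participating nonprofits, and its actual schema is not described in this post.

    import requests

    # Hypothetical endpoint and fields, for illustration only.
    GRAPH_URL = "https://graph.facebook.com/v2.11/me/fundraisers"

    def mirror_campaign(access_token, campaign):
        # Create a Facebook fundraiser that mirrors an off-Facebook campaign,
        # so donations and progress stay in sync with the external page.
        payload = {
            "charity_id": campaign["charity_id"],
            "name": campaign["title"],
            "goal_amount": campaign["goal_cents"],
            "currency": campaign["currency"],
            "external_fundraiser_uri": campaign["page_url"],
            "access_token": access_token,
        }
        response = requests.post(GRAPH_URL, data=payload)
        response.raise_for_status()
        return response.json()  # e.g. {"id": "..."} for the new fundraiser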

New Community Help API
Earlier this year we announced Community Help, a crisis response tool where people can ask for and give the help they need to recover following a crisis. We are now introducing a Community Help API, which will give disaster response organizations access to data from public Community Help posts that can offer important information about the needs of people affected by a particular crisis. We are piloting the Community Help API with NetHope and the American Red Cross. Our hope is that this data will help organizations coordinate information and response resources as fast as possible. We plan to announce more partnerships soon.
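
The sketch below shows one way a response organization might aggregate such data, tallying outstanding requests against offers per category for a given crisis. The record shape is an assumption for illustration; the actual payload provided to pilot partners is not described in this post.

    from collections import Counter

    # Illustrative records only; the real Community Help API payload is
    # not described in this post.
    sample_posts = [
        {"crisis": "flood-2017", "type": "request", "category": "shelter"},
        {"crisis": "flood-2017", "type": "offer", "category": "shelter"},
        {"crisis": "flood-2017", "type": "request", "category": "water"},
    ]

    def summarize_needs(posts, crisis_id):
        # Tally unmet need categories so responders can prioritize resources.
        asked = Counter(p["category"] for p in posts
                        if p["crisis"] == crisis_id and p["type"] == "request")
        offered = Counter(p["category"] for p in posts
                          if p["crisis"] == crisis_id and p["type"] == "offer")
        return {cat: asked[cat] - offered.get(cat, 0) for cat in asked}

    print(summarize_needs(sample_posts, "flood-2017"))  # {'shelter': 0, 'water': 1}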

Expanding Blood Donations Feature
In October, we launched a new blood donations feature, starting in India, to make it easier for people to donate blood. There are now more than 4 million blood donors signed up on Facebook in India. In addition to enabling people in need to connect to blood donors, our tools also allow organizations to connect to donors more efficiently. Hospitals, blood banks and non-profits can create voluntary blood donation events on Facebook, and nearby donors are notified of the opportunities to donate blood. In early 2018, we will expand blood donations to Bangladesh, where, like India, there are thousands of posts from people looking for blood donors every week.


We are constantly inspired by all the good that people do on Facebook and are committed to continuing to build tools that help communities do more good together.

Downloads:
Social Good Forum Live Broadcast
Social Good Forum Event Photos
Social Good Products Screenshots & Demos
SGF One Sheeter

Our Advertising Principles

By Rob Goldman, VP Ad Products

Our advertising team works to make meaningful connections between businesses and people. That’s a high bar, given that many people come to Facebook, Instagram and Messenger to connect with their friends and family. Our goal is to show ads that are as relevant and useful as the other content you see. If we do this effectively, advertising on Facebook can also help businesses large and small increase their sales and hire more people — as recently published research has shown.

While the world and our services are always evolving, we thought it would be helpful to lay out the principles that guide our decision making when it comes to advertising across Facebook, Messenger and Instagram.

We build for people first.
Advertising is how we provide our services for free. But ads shouldn’t be a tax on your experience. We want ads to be as relevant and useful to you as the other posts you see. This is important for businesses too, because you’re less likely to respond to ads that are irrelevant or annoying. That’s why we start with people. Our auction system, which determines which ads get shown to you, prioritizes what’s most relevant to you, rather than how much money Facebook will make from any given ad.
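
To make “prioritizes what’s most relevant” concrete, here is a minimal sketch of a relevance-weighted auction score: an ad’s bid is weighted by its estimated relevance, so a cheaper but more useful ad can beat a higher bid. The signals and weights below are illustrative assumptions, not the actual formula.

    from dataclasses import dataclass

    @dataclass
    class AdCandidate:
        advertiser: str
        bid: float                    # advertiser's bid in dollars
        estimated_action_rate: float  # predicted chance the viewer engages
        quality_score: float          # adjustment from hides, reports, etc.

    def total_value(ad):
        # Illustrative ranking score: the relevance terms can outweigh the
        # raw bid, so the highest bidder does not automatically win.
        return ad.bid * ad.estimated_action_rate + ad.quality_score

    ads = [
        AdCandidate("high_bid_low_relevance", 5.0, 0.01, 0.0),
        AdCandidate("low_bid_high_relevance", 1.0, 0.10, 0.05),
    ]
    print(max(ads, key=total_value).advertiser)  # low_bid_high_relevance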

We don’t sell your data.
We don’t sell personal information like your name, Facebook posts, email address, or phone number to anyone. Protecting people’s privacy is central to how we’ve designed our ad system. This means we can show you relevant and useful ads – and provide advertisers with meaningful data about the performance of their ads — without advertisers learning who you are.

You can control the ads you see.
Clicking the upper right-hand corner of an ad lets you easily hide ads you don’t like or block all ads from a particular advertiser. Clicking “Why am I seeing this?” tells you more about why you were shown the ad and takes you to your Ad Preferences. Anyone can visit their Ad Preferences to learn more about the interests and information that influence the ads they see, and to manage that information so they get more relevant ads.

Advertising should be transparent.
You should be able to easily understand who is showing ads to you and see what other ads that advertiser is running. It’s why we’re building an ads transparency feature that will let you visit any Facebook Page and see the ads that advertiser is running, whether or not those ads are being shown to you. This will not only make advertising on Facebook more transparent; it will also hold advertisers accountable for the quality of ads they create.

Advertising should be safe and civil; it should not divide or discriminate. 
We have Community Standards that prohibit hate speech, bullying, intimidation and other kinds of harmful behavior. We hold advertisers to even stricter advertising policies to protect you from things like discriminatory ads – and we have recently tightened our ad policies even further. We don’t want advertising to be used for hate or discrimination, and our policies reflect that. We review many ads proactively using automated and manual tools, and reactively when people hide, block or mark ads as offensive. When we review an ad, we look at its content, targeting, landing page and the identity of the advertiser. We may not always get it right, but our goal is to prevent and remove content that violates our policies without censoring public discourse.

Advertising should empower businesses big and small.
We believe that smaller businesses should have access to the same tools previously available only to larger companies with sophisticated marketing teams. We have millions of advertisers — from local businesses to community organizations — who depend on us to reach their audiences, grow their businesses and create more jobs. As long as they follow our Community Standards and the policies that help keep people safe, our platform should empower advertisers of all sizes and all voices to reach relevant audiences or build a community.

We’re always improving our advertising.
We’re always making improvements and investing in what works. As people’s behaviors change, we’ll keep listening to feedback to improve the ads people see on our service. For instance, when people shifted to mobile, we did, too. We know our work is far from done, so we’ll continue to introduce, test and update features like ad formats, metrics and ad controls.

Continuing Transparency on Russian Activity

A few weeks ago, we shared our plans to increase the transparency of advertising on Facebook. This is part of our ongoing effort to protect our platforms and the people who use them from bad actors who try to undermine our democracy.

As part of that continuing commitment, we will soon be creating a portal to enable people on Facebook to learn which of the Internet Research Agency Facebook Pages or Instagram accounts they may have liked or followed between January 2015 and August 2017. This tool will be available for use by the end of the year in the Facebook Help Center.

It is important that people understand how foreign actors tried to sow division and mistrust using Facebook before and after the 2016 US election. That’s why as we have discovered information, we have continually come forward to share it publicly and have provided it to congressional investigators. And it’s also why we’re building the tool we are announcing today.

Facebook Community Boost: Helping to Create Jobs and Provide Digital Skills Across the US

By Dan Levy, VP, Small Business

Today we’re introducing Facebook Community Boost, a new program to help US small businesses grow and to equip more people with the digital skills they need to compete in the new economy.

Facebook Community Boost will visit 30 US cities in 2018, including Houston, St. Louis, Albuquerque, Des Moines and Greenville, South Carolina. Facebook will work with local organizations to provide digital skills and training for people in need of work, to advise entrepreneurs how to get started and to help existing local businesses and nonprofits get the most out of the internet.

Since 2011 Facebook has invested more than $1 billion to support small businesses. Boost Your Business has trained more than 60,000 small businesses in the US and hundreds of thousands more around the world. More than 1 million small businesses have used Facebook’s free online learning hub, Blueprint, and more than 70 million small businesses use our free Pages tool to create an online presence. And we recently created a digital marketing curriculum that will help train 3,000 Michigan residents in digital skills development over the next two years.

According to new research by Morning Consult in partnership with the US Chamber of Commerce Technology Engagement Center and Facebook, small businesses’ use of digital translates into new jobs and opportunities for communities across the country. Small businesses provide opportunities for millions of people (they create an estimated four out of every five new jobs in the US), offer useful products and services, and often provide a place for people to come together. In addition:

  • 80% of US small and medium-sized businesses on Facebook say the platform helps them connect to people in their local community.
  • One in three US small and medium-sized businesses on Facebook say they built their business on the platform, and 42% say they’ve hired more people due to growth since joining Facebook.
  • Businesses run by African Americans, Latinos, veterans and those with a disability are twice as likely to say that their business was built on Facebook, and one and a half times more likely to say they’ve hired more people since joining the platform.
  • 56% of US small and medium-sized businesses on Facebook say they increased sales because of the platform; 52% say Facebook helps them sell products to other cities, states, and countries.

We want to do more to support communities across America – particularly for those who are transitioning to careers that require more digital skills. It’s why we’re introducing Facebook Community Boost.

  • If you’re looking for a job, we’ll provide training to help improve your digital and social media skills. According to the research, 62% of US small businesses using Facebook said having digital or social media skills is an important factor in their hiring decisions — even more important than where a candidate went to school.
  • If you’re an entrepreneur, we’ll have training programs on how to use technology to turn an idea into a business or show you ways to create a free online presence using Facebook.
  • If you’re a business owner we’re going to offer ways your business can expand its digital footprint and find new customers around the corner and around the globe.
  • If you’re getting online for the first time or you want to support your community, we’ll provide training on digital literacy and online safety. And we’ll also help community members use technology to bring people together, with features like Events and Groups.

Facebook Community Boost was developed based on requests from the small business community that Facebook spend more time in their cities and provide more training. We’re just getting started, and are pleased to partner with community leaders to learn where we can help. We are especially grateful to the governors and mayors representing each of the five cities where we’re starting in 2018, including:

  • Houston Mayor Sylvester Turner: “We’re happy to welcome Facebook to Houston to boost our residents’ digital skills and make sure our vibrant community of entrepreneurs and small businesses gets more out of the internet. I’m glad that Facebook recognized that one of the first five cities to benefit from this program should be Houston, the most diverse city in the nation, the largest economic engine of Texas and a proving ground not only for innovation in tech, energy, medicine and space exploration but also for mom-and-pop small businesses that reflect all the cultures of America and the globe.”
  • Texas Governor Greg Abbott: “Small businesses are the backbone of the Texas economy. My goal as governor is to help small businesses and the non-profit sector grow even faster in Texas, and I’m grateful for the help of industry movers like Facebook. I applaud the decision by Facebook to choose Houston, TX as one of the five U.S. cities for this important and exciting event, and for their investment in the entrepreneurial spirit of Texas.”
  • New Mexico Governor Susana Martinez: “It’s clear when it comes to innovation and economic development New Mexico and Facebook make great partners. We are proud Facebook has selected Albuquerque as one of the first five locations in the country to offer its new Community Boost program. Residents and small businesses in New Mexico will have the opportunity to build stronger digital skills to get the most out of the digital marketplace. We will be working with Facebook to ensure our businesses have access to this exciting opportunity because New Mexicans deserve a diversified economy for the future.”
  • Albuquerque Mayor Richard Berry: “As the City of Albuquerque is the second most digital city in the nation, we are excited to welcome Facebook to Albuquerque for its new Community Boost! This initiative will help train our residents in developing key skills to help them thrive in our ever-evolving digital world. We look forward to working closely with Facebook in coordinating the weeklong event to ensure Albuquerque and New Mexico small businesses are aware and can participate to the fullest extent. We hope this will help bring more tech skills and better job opportunities to our wonderful community.”
  • Iowa Governor Kim Reynolds: “It’s my priority to ensure that Iowans have the skills they need to fill the jobs of today and tomorrow, and to ensure that Iowa businesses have the tools they need to thrive in the digital economy. So I’m thrilled that Facebook has chosen Des Moines as a pilot city to launch its new Community Boost Program. Facebook has been a great partner in Iowa, and my administration looks forward to working with them on this wonderful project.”
  • South Carolina Governor Henry McMaster: “The fact that Facebook has chosen Greenville as one of the first cities to offer its new Community Boost is something that all of South Carolina can be proud of. We see new businesses come to our state and existing ones expand here every day, making it as important as ever that our people are trained and ready to do any job they’re asked to do. The work Facebook will be doing in Greenville is important for the future prosperity of South Carolina, and we look forward to working with them.”

We’ll be visiting 30 US cities in 2018, and we’d love to hear how Facebook Community Boost can help people in your city or town. You can share your story at facebook.com/tellcommunityboost.
