
Latest Tech Feeds to Keep You Updated…

Intelligent Search: Video summarization using machine learning

Videos account for some of the richest and most delightful content on the web. But it can be difficult to tell which cat video is really going to make you LOL. That can be a frustrating, time-consuming process, which is why we decided to help by building a smart preview feature to improve our search results and help our users find videos on the web more effectively. The idea is simple: hover over a video-result thumbnail and you see a short preview of the video that tells you whether it is the one you are looking for. You can try this out with a query on the video vertical – like funny cats.


The concept may be simple, but the execution is not; video summarization is a notoriously hard technical problem. Things that are intuitive to human beings, like "the main scene", are inherently case-dependent and difficult for machines to internalize or generalize. Here's how we use data and some machine-learning magic to solve this technically challenging problem.

Overview

There are broadly two approaches to video summarization: static and dynamic. Static summarization techniques try to find the important frames (still images) from different parts of the video and splice them together into a kind of storyboard. Dynamic summarization techniques divide the video into small segments/chunks and try to select and combine the important ones into a fixed-duration summary.

We chose the static approach for reasons of efficiency and utility. We had data indicating that over 80% of viewers hovered on the thumbnail for less than 10 seconds (i.e. users don't have the patience to watch long previews). We therefore thought it would be useful to provide a set of four diverse thumbnails that could summarize the video at a single glance; UX constraints kept us from adding more. Our problem thus became twofold: selecting the most relevant thumbnail (hereinafter referred to as the 'primary thumbnail') and selecting the four-thumbnail set that summarizes the video.

Step One: Selecting the primary thumbnail

Here's how we created a machine-learning pipeline for selecting the primary thumbnail of any video. First and foremost, you need labelled data, and lots of it. To teach our machines what good and bad thumbnails look like, we randomly sample 30 frames (a frame is a still image) from the video and show them to our judges. The judges evaluate these frames on subjective attributes such as image quality, representativeness, and attractiveness, and label each frame Good, Neutral, or Bad (scored 1, 0.5, and 0 respectively). A point to note: our training data is not query specific, i.e. the judges evaluate each thumbnail in isolation, not in the context of a query. This training data, along with a host of features extracted from these images (more on that in a bit), is used to train a boosted-trees regression model that predicts the label of an unseen frame from its features. The model outputs a score between 0 and 1 that lets us pick the best frame to use as the video's primary thumbnail.
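To make that concrete, here is a minimal sketch of this kind of frame-scoring pipeline, assuming scikit-learn's gradient boosted trees; the feature names and numbers are invented for illustration and are not the production model.

    # A minimal sketch of a frame-scoring pipeline of this kind, using
    # scikit-learn's gradient boosted trees. Feature names/values are
    # invented for illustration; the production model and features differ.
    import numpy as np
    from sklearn.ensemble import GradientBoostingRegressor

    # One row per judged frame: [contrast, blurriness, noise, n_faces, face_area]
    X_train = np.array([
        [0.82, 0.10, 0.03, 2, 0.25],
        [0.35, 0.55, 0.20, 0, 0.00],
        [0.60, 0.20, 0.05, 1, 0.10],
    ])
    # Judge labels: Good = 1.0, Neutral = 0.5, Bad = 0.0
    y_train = np.array([1.0, 0.0, 0.5])

    model = GradientBoostingRegressor(n_estimators=200, max_depth=3)
    model.fit(X_train, y_train)

    def pick_primary_thumbnail(frame_features):
        """Score all sampled frames; the argmax becomes the primary thumbnail."""
        scores = model.predict(frame_features)
        return int(np.argmax(scores)), scores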

Which features turned out to be useful in selecting a good thumbnail? Core image-quality features proved very valuable (the level of contrast, blurriness, the level of noise, etc.). We also used more sophisticated features powered by face detection (number of faces detected, face size and position relative to the frame, etc.), along with motion-detection and frame-difference/frame-similarity features. Visually similar and temporally co-located frames are grouped into video sequences called scenes, and the scene length of the corresponding frame is also used as a feature – this helps in deciding whether a selected thumbnail is a good one. Finally, we use deep neural networks (DNNs) to train high-dimensional image vectors on the image-quality labels; these vectors capture the quality of the frame layout (factors like the zoom level, e.g. the absence of extreme close-ups and extreme zoom-outs). The frame with the highest predicted score is selected as the primary thumbnail shown to the user.
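As a rough illustration of what some of these features might look like in code, here is a hedged sketch using OpenCV; Laplacian variance for sharpness and a Haar cascade for faces are common stand-ins, not the actual production extractors (motion features and the DNN vectors are omitted).

    # Illustrative per-frame feature extraction with OpenCV. These are
    # common proxies for the qualities described above; the production
    # extractors, motion features, and DNN vectors are not shown.
    import cv2

    def frame_features(frame_bgr):
        gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
        contrast = float(gray.std()) / 255.0               # level of contrast
        sharpness = cv2.Laplacian(gray, cv2.CV_64F).var()  # low variance => blurry
        cascade = cv2.CascadeClassifier(
            cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
        faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
        n_faces = len(faces)
        face_area = sum(w * h for (_x, _y, w, h) in faces) / gray.size
        return [contrast, sharpness, n_faces, face_area]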

Here is a visual schematic:

Step Two: Selecting the remaining thumbnails for the video summary

The next step is to create a four-thumbnail set that provides a good, representative summary of the video. A key requirement is comprehensiveness, and it brings several technical challenges. First, we could simply take the four highest-scoring frames from the previous step and call that a summary, but in most cases that won't work: there's a high chance the four top-scored frames come from the exact same scene and do a poor job of summarizing the whole video. Second, from a computational-cost point of view, it is impractical to evaluate every possible set of four candidate frames. Third, it's hard to collect training data about which four frames best summarize a video, because users cannot realistically pick the four best frames out of thousands. Here's how we handle each of these problems.

To address comprehensiveness, we introduce a similarity factor into the objective function. The new objective function for the expanded thumbnail set not only tries to maximize the total image-quality score but also includes an additional tuning parameter for similarity. The weight for this parameter is trained from users' labelled data (more on that below). The similarity factor currently has a negative weight, i.e. a set of four high-quality, mutually diverse frames is generally considered a better summary than a corresponding set of similar frames.

We deal with computational complexity by formulating the problem as a greedy optimization. As stated before, it's not possible to evaluate every possible combination of four-frame summaries. Moreover, the best combination of four frames need not contain the primary thumbnail. But since we've already taken great pains to select the primary thumbnail, we can greatly simplify the task by using it as a starting point and greedily selecting just three more thumbnails that maximize the total score. That's greedy optimization.
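Putting the pieces together, here is a sketch of greedy selection under the similarity-penalized objective described above; the function names and the weight value are illustrative assumptions, not production code.

    # A sketch of greedy four-thumbnail selection under the modified
    # objective. quality[i] is the per-frame model score; sim(i, j) is
    # any pairwise visual-similarity measure in [0, 1]. The similarity
    # weight is learned from judged data and comes out negative,
    # penalizing near-duplicate frames.
    def select_summary(quality, sim, primary_idx, w_sim=-0.5, k=4):
        chosen = [primary_idx]                  # start from the primary thumbnail
        candidates = set(range(len(quality))) - {primary_idx}
        while len(chosen) < k and candidates:
            # Marginal gain of adding frame f: its own quality plus the
            # (negatively weighted) similarity to frames already chosen.
            def gain(f):
                return quality[f] + w_sim * sum(sim(f, c) for c in chosen)
            best = max(candidates, key=gain)    # greedy step
            chosen.append(best)
            candidates.remove(best)
        return chosen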

Here’s how we generate training data for learning the weights for similarity and other features. We show judges a set of 4 frames on LHS and RHS (these frames are randomly selected from the video) and ask them to do a side-by-side judgment (label as “leftbetter”, “rightbetter”, or “equal”). This training data is then used to derive the thumbnail-set model by training the new objective function (total image quality score and similarity) for the 4-frame set. As it turned out, based on the training data, the weight for similarity is negative (i.e. in general, more visually diverse frame-sets lead to better summaries). That’s how we select the 4-thumbnail set.
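For the curious, one plausible way to learn such weights from side-by-side judgments, assuming aggregate set-level features, is a simple pairwise logistic model; this is our illustration, not necessarily the exact training method used.

    # Learning set-level weights from "leftbetter"/"rightbetter" judgments:
    # fit a logistic model on the difference of each pair's aggregate
    # features [total quality, total pairwise similarity]. Data is invented.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    left_feats  = np.array([[3.2, 1.8], [2.9, 0.4], [3.5, 2.6]])
    right_feats = np.array([[3.0, 0.5], [3.1, 1.9], [3.4, 0.7]])
    labels = np.array([0, 1, 0])  # 1 = "leftbetter", 0 = "rightbetter"; "equal" pairs dropped

    clf = LogisticRegression().fit(left_feats - right_feats, labels)
    print(clf.coef_)  # expect a positive weight on quality, negative on similarity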


Here are some examples that show the improved performance of our new model over baseline static video summarization methods.



Outcome: Creating a playable preview video from 4 thumbnails

Of course, with all this technical wizardry we can't forget our main objective: generating a playable video clip from these four thumbnails to help our users better locate web videos. We do that by extracting a small clip around each of the four frames. How we find the boundaries to snip is probably the subject of another blog. The result of all of this is a Smart Preview that helps users know what's in a video, which means all of us can spend less time searching and more time watching the videos we want. As with earlier features like the coding answer, the goal is to build an intelligent search engine and save users' time. Try it out on our video vertical.

 

Verifying your Google Assistant media action integrations on Android

Posted by Nevin Mital, Partner Developer Relations

The Media Controller Test (MCT) app is a powerful tool that allows you to test the intricacies of media playback on Android, and it's just gotten even more useful. Media experiences, including voice interactions via the Google Assistant on Android phones, cars, TVs, and headphones, are powered by Android MediaSession APIs, and this tool will help you verify your integrations. We've now added a verification testing framework that can be used to help automate your QA testing.

The MCT is meant to be used in conjunction with an app that implements media APIs, such as the Universal Android Music Player. The MCT surfaces information about the media app's MediaController, such as the PlaybackState and Metadata, and can be used to test inter-app media controls.

The Media Action Lifecycle can be complex to follow; even in a simple Play From Search request, there are many intermediate steps (simplified timeline depicted below) where something could go wrong. The MCT can be used to help highlight any inconsistencies in how your music app handles MediaController TransportControl requests.

Timeline of the interaction between the User, the Google Assistant, and the third party Android App for a Play From Search request.

Previously, using the MCT required a lot of manual interaction and monitoring. The new verification testing framework offers one-click tests that you can run to ensure that your media app responds correctly to a playback request.

Running a verification test

To access the new verification tests in the MCT, click the Test button next to your desired media app.

MCT Screenshot of launch screen; contains a list of installed media apps, with an option to go to either the Control or Test view for each.

The next screen shows you detailed information about the MediaController, for example the PlaybackState, Metadata, and Queue. There are two buttons on the toolbar in the top right: the button on the left toggles between parsable and formatted logs, and the button on the right refreshes this view to display the most current information.

MCT Screenshot of the left screen in the Testing view for UAMP; contains information about the Media Controller's Playback State, Metadata, Repeat Mode, Shuffle Mode, and Queue.

By swiping to the left, you arrive at the verification tests view, where you can see a scrollable list of defined tests, a text field to enter a query for tests that require one, and a section to display the results of the test.

MCT Screenshot of the right screen in the Testing view for UAMP; contains a list of tests, a query text field, and a results display section.

As an example, to run the Play From Search Test, enter a search query into the text field and then hit the Run Test button. Looks like the test succeeded!

MCT Screenshot of the right screen in the Testing view for UAMP; the Play From Search test was run with the query 'Memories' and ended successfully.

Below are examples of the Pause Test (left) and Seek To test (right).

MCT Screenshot of the right screen in the Testing view for UAMP; a Pause test was run successfully. MCT Screenshot of the right screen in the Testing view for UAMP; a Seek To test was run successfully.

Android TV

The MCT now also works on Android TV! For your media app to work with the Android TV version of the MCT, it must have a MediaBrowserService implementation. Please see here for more details on how to do this.

On launching the MCT on Android TV, you will see a list of installed media apps. Note that an app will only appear in this list if it implements the MediaBrowserService.

Android TV MCT Screenshot of the launch screen; contains a list of installed media apps that implement the MediaBrowserService.

Selecting an app will take you to the testing screen, which will display a list of verification tests on the right.

Android TV MCT Screenshot of the testing screen; contains a list of tests on the right side.

Running a test will populate the left side of the screen with selected MediaController information. For more details, please check the MCT logs in Logcat.

Android TV MCT Screenshot of the testing screen; the Pause test was run successfully and the left side of the screen now displays selected MediaController information.

Tests that require a query are marked with a keyboard icon. Clicking on one of these tests will open an input field for the query. Upon hitting Enter, the test will run.

Android TV MCT Screenshot of the testing screen; clicking on the Seek To test opened an input field for the query.

To make text input easier, you can also use the ADB command:

adb shell input text [query]

Note that '%s' will add a space between words. For example, the command adb shell input text hello%sworld will add the text "hello world" to the input field.

What's next

The MCT currently includes simple single-media-action tests for the following requests:

  • Play
  • Play From Search
  • Play From Media ID
  • Play From URI
  • Pause
  • Stop
  • Skip To Next
  • Skip To Previous
  • Skip To Queue Item
  • Seek To

For a technical deep dive on how the tests are structured and how to add more tests, visit the MCT GitHub Wiki. We'd love for you to submit pull requests with more tests that you think are useful to have and for any bug fixes. Please make sure to review the contributions process for more information.

Check out the latest updates on GitHub!

[Video] Unveil: How Samsung Puts Productivity at the Heart of the Galaxy Tab S4

In an age of iterative updates, isn't it time your tablet evolved? Samsung is answering that call with the Galaxy Tab S4, taking the productivity, entertainment, and connectivity experience to the next level. With Samsung DeX on a tablet for the very first time, a refined S Pen, an immersive display optimized for entertainment, and true-to-life audio quality, this isn't your traditional tablet – it's a leap forward in power and productivity, so you can get more done wherever you are.

 

Built for the digital savvy, tech-forward professional, the Galaxy Tab S4 introduces a range of new updates that will change the way you interact with your portable devices. Check out the video below as Hassan Anjum and Jonathan Wong, directors of product marketing at Samsung Electronics, walk you through the latest addition to the Samsung Galaxy family.

 

An Update on Video Retention Metrics

Video creators on Facebook have a range of metrics available to help them understand the reach, engagement, and overall performance of the videos they share. As more publishers and creators are sharing longer videos on our platform, it’s becoming increasingly important to better understand audience retention — the metric that shows how well a video is holding the attention of viewers.

Today we’re introducing improvements to the video retention graph available to Pages in Video Insights. We want to make this visualization more useful for video creators to help them better understand how their audience is consuming their longer videos, so we’re providing new breakdowns and insights.

In the coming weeks, Page admins will be able to access the following new metrics in their video retention graph:

  1. Followers vs Non-Followers: Breakdown of audience retention by people who follow your Page and people who don’t follow your Page.
  2. Audience Demographics: Breakdown of audience retention by gender.
  3. Zoom Chart: Zoom into the chart to get a closer look at the data, so you can better visualize the engagement throughout the video to see how key moments affected viewership.

In addition to these improvements, we’ve also made a fix to the video retention graph. While the absolute retention data available to publishers and creators was accurate, in some cases the retention graph rendered inaccurately for videos longer than two minutes. This was the result of a bug that we have fixed, and the retention graph is now accurate. We apologize for the error.

We know that publishers and creators use the metrics and insights we provide to inform their strategy and understanding, and we are committed to continually improving the functionality, reliability, and accuracy of our metrics.

Best practices and updates on video and monetization

By: Nick Grudin, VP of Media Partnerships & Maria Angelidou-Smith, Product Management Director

We have previously shared that Facebook is prioritizing content that encourages meaningful interactions between people and videos that people seek out and return to regularly. Today we are sharing early best practices on how creators and publishers can align to these priorities while providing detail on some of the updates we are making to help content partners monetize this type of content.

Best practices for shows and videos
An engaged and loyal audience has a meaningful connection both to your content and to fellow viewers. This type of audience has a direct correlation with monetization – as their intent to engage with your video content grows, so too can your monetization opportunities. Some best practices for how to drive this kind of viewing behavior include:

  • Build audiences on Facebook surfaces where people seek out content – Encourage audience engagement outside of News Feed on surfaces that support repeat, loyal viewership such as in Watch, on a Page or in a Group. These places allow for audiences to meaningfully interact with each other to build community around your content.
  • Set and fulfill the creative expectations of viewers – A consistent voice and format drives repeat viewing and longer view times. Some successful formats that foster communities of fans around content include serialized shows or videos with a predictable cast and format. For example, Crypt Monsters by Crypt TV explores a new monster every week, Discovery Twins follows the adventures of Ava and Alexis McClure, and Everything Explained delves into the science behind everyday occurrences. In each case, the audience knows what to expect with each new video and is more likely to return and view more episodes.
  • Establish a release cadence – A set publishing schedule encourages audiences to consistently return to watch the next episode. Posting related videos, photos, or text posts helps to keep your fans engaged between episodes and seasons. For example, new episodes of Tia Mowry’s Quick Fix post every Friday and often have thousands of views within a few hours because many of her followers anticipate kicking off their weekends with her lifestyle tips.
  • Create an active experience – Sourcing topics from audiences and engaging with commenters draws the audience closer to the content. For example, Riddle Me This, an interactive brain teaser show, sources potential brain teasers from its 60,000+ fan group. And taking a unique spin on the traditional ‘sports talk show’, ESPN First Take has hosts set a weekly topic and invite fans, via the official group, to submit their own video commentary on that topic with the fan from the top submission joining Friday’s episode to debate directly with an ESPN host.

Monetizing Video Content
We are focused on growing payouts for creators and publishers who develop engaged, loyal audiences and repeat viewing.

In our December 2017 announcement, we focused Ad Break eligibility on shows and longer videos. We continue to invest in new formats and tools designed to help maximize payouts for eligible videos that people value:

  • Pre-Roll: Earlier this year we began testing pre-roll in Watch. We have seen promising signs, so we are expanding testing to places where people seek out videos, like in search results or on a Page timeline. For example, if a person searches for a show, a pre-roll may play when they select the episode to watch.
  • Preview Trailers: We continue to explore ad formats that can better reach people intending to engage with content. We will test a show ‘preview’ trailer format that helps people discover episodes in News Feed. When a viewer taps on the trailer, we’ll play a short ad before moving them to view the full episode in Watch. Partners will also be able to boost this format, reaching new audiences and driving more predictable tune-in while still being able to monetize.
  • Ad Breaks Auto Insertion: We recently introduced a feature that automatically detects the ideal place for an ad break within an eligible video. A well-placed ad break can have a big impact on publisher payouts.
  • Pre-Publish Brand Safety Check: We have also introduced a feature enabling content partners to submit videos for monetization eligibility review before posting, ensuring the video will receive ad opportunities for all of its distribution.

Removing incentives from content that creates less value for people: As part of these efforts, we are clarifying policies and updating our Monetization Eligibility Standards and Content Guidelines for Monetization to make clear the types of programming and distribution practices that will not be supported. Enforcement will be rolled out in phases so content partners can adapt. Eventually, however, repeat abuse could result in losing access to monetization features altogether:

  • Manufactured sharing and distribution schemes: Content partners with paid arrangements for third-party Pages to methodically and inorganically share their videos can no longer monetize views originating on those Pages. This behavior optimizes for distribution rather than quality and does not build deep relationships between people and content. This change aligns with our recent Branded Content policy update, which prohibits Pages and Profiles from accepting payment to share content they did not have a hand in creating.
  • Formats unsuitable for an ad: When content partners use video formats that aren’t actually video – like static or minimal movement videos or content that just loops – they are creating experiences not intended for ad break monetization. People do not expect to see ads in this type of content, and this is not the type of content advertisers want to run ads in.
  • Limited editorialization of content: Pages primarily distributing videos of repurposed clips from other sources with limited editorialization do not foster engaged, loyal communities in the way that Pages that produce and publish original, thematic or episodic videos do. While we will not be taking immediate enforcement action on this issue, we want to signal to content producers that this is a programming style we will more deeply evaluate over the coming weeks and months to assess what level of distribution and monetization matches the value created for people.

Looking Forward
We know we have a long way to go, but we remain grateful to our partners who continue to create real value for their audiences by developing quality content and building engaged communities. We are focused on delivering new capabilities designed to better serve them.

For content partners seeking further detail on best practices, additional information is provided here.

Millions shared New Year’s Eve moments with friends and family on Facebook Live

By Erin Connolly, Product Manager, and Kunal Modi, Engineering Manager

New Year’s Eve is a time for reflection and celebration with friends and family. More than 10 million people around the world went live on Facebook to share their New Year’s Eve moments with their communities.

The night topped last year's live broadcast activity, with people sharing 47% more live videos.

When you go live you can share your experiences with people you care about. With Facebook Live, people can still be in the same moment even if they aren’t in the same place.

People were excited to count down to 2018 with friends — wherever they were. We saw more than 3 times as many broadcasts with a friend on New Year’s Eve compared to an average day in December, making it the biggest day so far for Live With.

The video shows Facebook data on live broadcasts per minute as midnight struck around the world. Each small blip on the globe represents 100 live videos; each large burst represents 1,000. Countries are color-coded by the percentage of users who posted live videos.

 

Wherever you were and however you celebrated, all of us at Facebook wish you a happy New Year!


Updates to Video Distribution and Monetization

By: Maria Angelidou-Smith, Product Management Director & Abhishek Bapna, Product Manager

Facebook is home to a wide variety of publishers and creators who make videos that connect people, spark conversation and build community. Today we’re sharing updates on video distribution and our efforts to build effective video monetization tools for our partners that complement great viewing experiences for people:

  • Video distribution: Updating News Feed ranking to improve distribution of videos that people actively want to watch — for example, videos from Pages that have strong repeat viewership.
  • Ad Breaks: Improving the viewing experience for people by updating our guidelines for Ad Breaks, and providing new metrics for publishers and creators to understand how their Ad Breaks perform.
  • Pre-roll: Testing pre-roll ads in places where people intentionally go to watch videos, like Watch.

VIDEO DISTRIBUTION

Today, the majority of video discovery and watch time happens in News Feed. News Feed provides a great opportunity for publishers and creators to reach their audience, drive discovery, and start to build deeper connections for their content.

We are updating News Feed ranking to improve distribution of videos from publishers and creators that people actively want to watch. With this update, we will show more videos in News Feed that people seek out or return to watch from the same publisher or creator week after week — for example, shows or videos that are part of a series, or from partners who are creating active communities. Engaging one-off videos that bring friends and communities together have always done well in News Feed and will continue to do so.

We also want to make it easier for show creators to reach their existing community. For example, if your Show Page is linked to your existing Page, we will enable you to distribute episodes directly to all your followers. This will make it easier for show creators to grow an audience for new shows, and for people to connect with content they may be interested in. This is something many show creators have been asking for, and we will continue to improve the experience of creating and managing shows.

While News Feed will remain a powerful place for publishers and creators to grow and connect with their audience, over time we expect more repeat viewing and engagement to happen in places like Watch. The Discover tab in Watch will also prioritize shows that people come back to. As new shows build audiences, places like Watch are well suited for people to predictably catch up on the latest episodes and content from their favorite publishers and creators, and engage in a richer social viewing experience – for example, connecting with other fans in a dedicated Facebook Group for the show.

MONETIZATION SOLUTIONS

We are updating our monetization features to support the different types of video viewing experiences on our platform.

Branded Content

Branded content is a valuable way for publishers to generate more revenue and will continue to be available to all types of video on Facebook. We’ve seen a wide range of publishers and creators find success with branded content in News Feed, extending sponsorships or partnerships onto our platform in creative ways. Since the beginning of this year, the number of publishers and creators posting branded content each month has increased by 4x, driven in part by opening availability to all Pages.

Ad Breaks

Viewers tell us they prefer it when the video they are watching “merits” an Ad Break — for example, content they are invested in, content they have sought out, or videos from publishers or creators they care about and are coming back to. These videos tend to be longer, with more narrative development.

As a result, starting in January we will focus the expansion of Ad Breaks on shows, and Ad Break eligibility will shift to videos and episodes that are at least three minutes long, with the earliest potential Ad Break at the one-minute mark. Previously, videos in the test were eligible for Ad Breaks if they were at least 90 seconds long, with the first Ad Break able to run at 20 seconds.
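Expressed as a simple check (durations in seconds), the new eligibility rule looks like this:

    # The updated eligibility rule from above, as a simple check (seconds).
    def ad_break_eligible(video_duration_s: int, first_break_at_s: int) -> bool:
        return video_duration_s >= 180 and first_break_at_s >= 60

    ad_break_eligible(200, 65)   # True under the new rules
    ad_break_eligible(120, 30)   # False now (would have passed the old 90s / 20s test)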

Our consumer research showed that moving from 90-second to three-minute videos with Ad Breaks improved overall satisfaction. Furthermore, across initial testing, satisfaction increased 18% when we delayed the first Ad Break placement. Viewer-satisfaction numbers are typically difficult to lift, so this indicates a meaningful positive shift, increasing the likelihood that people will continue watching the content through the break.

To help creators and publishers better understand the performance of their Ad Breaks, we also recently introduced improvements to the metrics we provide. We added a dedicated Ad Break insights tab, so creators and publishers can view their video monetization performance in a dedicated place, separate from their video metrics. We also added two new metrics: Ad Break impressions at the video level and Ad Break CPMs at the video level.

Finally, we are updating the Live Ad Breaks test. The test will no longer support Profiles, and will only support Pages with more than 50k followers. We’re making these changes because we’ve found Profiles and Pages below this threshold are more likely to share live videos that fail to comply with our Content Guidelines for Monetization. Live video publishers below this threshold also tend to have smaller audiences for their broadcasts, and therefore aren’t able to garner meaningful revenue from Ad Breaks. We’ll continue to work jointly with our partners on testing and improving the product to deliver value.

Pre-roll

Next year, we will begin testing pre-roll ads in places where people proactively seek out content, like Watch. While pre-roll ads don’t work well in News Feed, we think they will work well in Watch because it’s a place where people visit and come back to with the intention to watch videos. We’ll start with 6-second pre-roll with the goal of understanding what works best for different types of shows across a range of audiences.

We’re excited about the future, and we’re committed to providing our community of publishers and creators with the solutions they need to build a thriving business on our platform.

Adding Highlighted Shares to Video Insights for Pages

Today we’re adding Highlighted Shares to Video Insights for Pages, a new feature that will give publishers and creators more information about the top Pages that are re-sharing their videos.

Available to all Pages globally, Highlighted Shares showcases the top five Pages that have re-shared a video, ranked by views. The video publisher will also be able to see associated insights from re-sharers, like post engagement and average watch time.

Video publishers have requested more information about where people are watching and engaging with their videos to help inspire future collaborations with other Pages. We hope this update will better inform video publishers about how their videos are performing across Facebook, and enable them to connect with other Pages to build community.


New Rights Manager Integrations

By Xiaoyin Qu, Product Manager

We launched Rights Manager last year to help rights owners protect their video content at scale, and have continued to improve the tool. We’ve heard from rights owners that they’d like more ways to access Rights Manager capabilities, so today we are announcing new third-party integrations.

Three service providers — Friend MTS, MarkMonitor, and ZEFR — are integrating with the Rights Manager API to provide rights management on Facebook as a service. These companies are proven industry leaders in the rights management space.

We want to give rights owners access to Rights Manager in the ways that make the most sense for their business. Currently, rights owners can use Rights Manager through Page Publishing Tools and by integrating their applications with Rights Manager via the API. Today’s integrations expand access to Rights Manager functionality by allowing rights owners to work with service providers to help manage their intellectual property if that’s their preferred option.

If you’re already using Rights Manager and you’d like to work with one of these providers, you can reach out to the company directly. If you’re a rights owner not currently enrolled in Rights Manager, you can first apply for access here, and then reach out to the company of your choice. These integrations will be available in the coming weeks.

If you’re a service provider that is interested in integrating with the Rights Manager API, contact us here.

Introducing Updates to the Live API

By Supratik Lahiri, Product Manager and Chris Tiutan, Product Marketing Manager

Since launch, we’ve seen many publishers use the Live API to deliver professional-quality live video experiences to their audiences.

As we work to improve the Live API experience, today we’re introducing new tools that help publishers and developers create more seamless Facebook Live broadcasts.

Updates to the Facebook Live API include:

Automated encoder configuration
Publishers have given us feedback that when going live through the Live API, it can be challenging to constantly fine-tune their encoder settings to ensure consistent, stable Facebook Live streams.

With this update, the Live API provides preview, start, and stop states for your encoder and automatically configures it with the optimal settings for a Facebook Live broadcast. You can be confident you're broadcasting the highest-quality live video every time, without manually adjusting settings before each broadcast.
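For developers connecting programmatically, creating a broadcast goes through the Graph API's live_videos edge. Here is a minimal sketch; the endpoint and basic fields (status, stream_url) are part of the documented Live API, but the exact fields controlling the new encoder states aren't spelled out in this post, so treat anything beyond the basics as an assumption.

    # A minimal sketch of creating a broadcast via the Live API's
    # /{page-id}/live_videos Graph API edge. Placeholders must be
    # replaced; fields for the new preview/start/stop encoder states
    # are not shown here because this post doesn't document them.
    import requests

    PAGE_ID = "<your-page-id>"             # placeholder
    ACCESS_TOKEN = "<page-access-token>"   # placeholder

    resp = requests.post(
        f"https://graph.facebook.com/v3.0/{PAGE_ID}/live_videos",
        data={
            "status": "UNPUBLISHED",       # create the broadcast without going live yet
            "title": "My show, episode 1",
            "access_token": ACCESS_TOKEN,
        },
    )
    broadcast = resp.json()
    # Point your encoder at stream_url; publish when ready to go live.
    print(broadcast.get("id"), broadcast.get("stream_url"))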

Frame-accurate start times
We know having a strong start to a broadcast is important, and we’ve heard from publishers that it can be confusing to determine exactly when a Live API broadcast is in fact live to viewers.

With this update, developers can now indicate the exact frame and moment to go live. One great use case for this is to implement a countdown clock that signals to the broadcaster the exact moment when a stream is live.

Publishers that programmatically connect to the Live API can enable these new features within their broadcast setup. Publishers can also work with a video solutions provider who has incorporated these updates. We’ve been working with Wowza to test these updates, which are now available in their live video encoding and delivery solution, ClearCaster.

We’re excited that all developers will be able to integrate these updates to the Live API and make their Facebook Live broadcasts even better and more seamless.

Developers can learn how to integrate these updates to the Live API from our Live API documentation.

Standards and Guidelines for Earning Money from Your Content on Facebook

By Nick Grudin, VP of Media Partnerships

Every day, people come to Facebook to connect with stories from creators and publishers they love. Fostering an ecosystem where creators and publishers of all sizes can connect with their fans and earn money for their work is a critical part of creating these connections and experiences for our community.

We want to support a diverse range of creators and publishers, which is why we’ve introduced a range of monetization options, including Branded Content and Instant Articles. More recently, we’ve been testing Ad Breaks with a group of publishers, and we’re working on opening it up more broadly.

As we continue to expand our monetization offerings, it's important that we provide clear guidelines around what can and cannot be monetized on our platform. Many of these experiences are made possible by ads from over 5M advertisers on Facebook, and those advertisers need to feel confident and in control of where their ads appear.

That’s why today, we are introducing monetization eligibility standards. These standards provide clearer guidance around the types of publishers and creators that are eligible to earn money on Facebook, along with guidelines on the kind of content that can be monetized. We have similar standards for publishers monetizing their own sites and apps through the Audience Network — learn more here.

Standards on who can access monetization features on Facebook

To use any of our monetization features, you must comply with Facebook's policies and terms, including our Community Standards, Payment Terms, and Page Terms. Our goal is to support creators and publishers who are enriching our community. Creators and publishers who violate our policies regarding intellectual property, authenticity, and user safety, or who engage in fraudulent business practices, may be ineligible to monetize using our features.

Creators and publishers must have an authentic, established presence on Facebook — they are who they represent themselves to be, and have had a profile or Page on Facebook for at least one month. Additionally, some of our features like Ad Breaks require a sufficient follower base, something that could extend to other features over time.

Those who share content that repeatedly violates our Content Guidelines for Monetization, share clickbait or sensationalism, or post misinformation and false news may be ineligible or may lose their eligibility to monetize.

Guidelines on what content can be monetized on Facebook

These guidelines provide more detail on the types of content that advertisers may find sensitive, and should help you make more informed decisions about what content to monetize. These apply to videos on Facebook today, and will extend to Instant Articles over time.

While the guidelines do not cover every scenario, they are a good indicator of what types of content are likely to generate more revenue. Keep in mind that even if your content is eligible for ads, some brands and advertisers may choose to use brand safety controls to tailor where their ads run.

If your content does not comply with these standards, we will notify you that we have removed the ads. If you believe your content should be eligible, you can reach out through the appeals channel.

These guidelines focus specifically on what content is eligible for ads. Your content may be impacted by these guidelines, but will remain on the platform provided it meets our Community Standards.

***

We hope these standards and guidelines help you understand how to successfully earn money from your content on Facebook. This is part of our ongoing process to provide our partners with more clarity and transparency, and we remain committed to improving our products and experiences for people, publishers, creators and advertisers.

Introducing Watch and Shows on Facebook

By Nick Grudin, VP Media Partnerships

Today we announced Watch, a new platform for shows on Facebook. More information can be found here.

Watch is made up of shows, a new type of video on Facebook. Shows are composed of episodes – live or recorded – that follow a consistent theme or storyline. Shows are a great format if you want to share a video series, like a weekly cooking show, a daily vlog, or a set of videos with recurring characters or themes.

Our goal is for Watch to be a platform for all creators and publishers to find an audience, build a community of passionate fans, and earn money for their work.

Initially, Watch will be available to a limited group of people in the U.S. on mobile, desktop, and our TV apps, before we make it available to more people in the U.S. in the coming weeks. Because we’re early with Watch, we’re starting by testing with a limited group of publishers and creators who are making shows. We are also funding some shows to help seed the ecosystem, gather feedback, and inspire others.

SHOW PAGES

We’re also introducing Show Pages to make it seamless to create a Facebook show and publish new episodes. Show Pages are organized in a way that makes it easy for people to understand what a show is all about, watch episodes and other related videos, and connect with communities that have formed around a show.

We think creating a show has a number of benefits, like the ability to reach a predictable and loyal audience. People will be able to follow the shows they like, and when there’s a new episode of a show, Facebook will inform the show’s followers and the episode will automatically appear in their Watchlist in Watch.

Over time, creators will be able to monetize their shows through Ad Breaks. We’ve been testing Ad Breaks over the past few months, and we will be slowly opening up availability to more creators to ensure we’re providing a good experience for the community. Creators can also create sponsored shows using our branded content tag.

If you would like to register your interest in creating a show, please visit this link. We’re excited to see how creators and publishers use shows to connect with their fans and community.
