Today, we’re making an update to News Feed ranking that will help surface videos people are proactively seeking out and coming back to on Facebook. This change takes two factors into account:
Intent matters. With this update, videos from Pages that people proactively seek out (by using Search or going directly to a Page, for example) will see greater distribution in News Feed.
Repeat viewership matters. We will also show more videos in News Feed that people return to watch from the same publisher or creator week after week.
What this means for my Page

As we’ve said, watching video on Facebook has the power to drive conversations, and News Feed remains a place where people discover and watch videos. Engaging videos that not only bring people together but also drive repeat viewership and engagement will do well in News Feed.
For more on this update, you can visit the Media Blog.
Jump, Google’s platform for virtual reality video capture that combines high-quality VR cameras and automated stitching, simplifies VR video production and helps filmmakers of all backgrounds and skill levels create amazing content. For the past two years, we’ve worked with NFL Films, one of the most recognized teams of filmmakers in sports and the recipient of 112 Sports Emmys, to show what some of the best creators could do with Jump. Last year they debuted the first season of the virtual reality docuseries “Immersed,” and today the first three episodes of season two land on Daydream through YouTube VR and the NFL’s YouTube channel. This season will give fans an even more in-depth look at some of the NFL’s most unique personalities through three multi-episode arcs, each dedicated to a different player.
Shot with the latest Jump camera, the YI HALO, the first three episodes follow Chris Long, defensive end for the Philadelphia Eagles. Each episode gives fans a sneak peek into his life on and off the field, from his decision to donate his salary to charity to a look at how he prepares for game day. They’re available on Daydream through YouTube VR and the NFL’s YouTube channel today, with future episodes featuring Calais Campbell of the Jacksonville Jaguars and players from the 2018 Pro Bowl coming soon.
We caught up with NFL Films Senior Producer Jason Weber to hear more about season two, what it was like to use Jump and advice for other filmmakers creating VR video content for the first time:
What makes season two of “Immersed” different from the first season?
For season two of NFL “Immersed,” we wanted to try and dig a bit deeper into the stories of our players and give fans a real sense of what makes them who they are on and off the field, so we’re devoting three episodes to each subject.
VR is such a strong vehicle for empathy, and we wanted to focus the segments on players who are making a difference on and off the field. Chris Long is having a tremendous season with the Eagles as part of one of the best defenses in football, but his impact off the field is equally inspiring. Calais Campbell is a larger-than-life character whose influence is being felt on the resurgent Jaguars and throughout his new community in Jacksonville. And the Pro Bowl is a unique event where all of the best players come to have fun, and the relaxed setting gives us a chance to put cameras where they normally can’t go, giving viewers a true feeling of what it’s like to play with the NFL’s finest.
Last year was NFL Films’ first foray into shooting content in VR. What was it like filming and producing season one, and how did it compare to your experience with season two this year?
We learned a lot last season; in particular, the challenges of bringing multiple VR cameras to the sidelines on game day. As fast as the game looks on TV, it moves even faster when you’re right there on the field. Getting the footage we need while staying ready to get out of the way when a ball or player is coming right at you took some time to master.
What makes shooting for VR different from traditional video content? What considerations do you have to make when shooting in VR?
Camera position is one big difference in shooting VR versus traditional video content. When we shoot in traditional video formats our cinematographers are constantly moving to capture different angles and frames of our subjects and scenes. With VR—though we've noticed a slight shift toward more cuts and angles in edited content in the past year—letting a scene play longer from one angle and positioning the camera so that the action takes advantage of the 360-degree range of vision helps differentiate a VR production from a standard format counterpart.
What did you like about using the Yi Halo to shoot the second season of “Immersed?”
With the Halo, we were most excited about the Up camera. You might not think that a camera facing straight up would make that much of a difference in football, but there’s a lot happening in that space that would get lost without it. We can now place a camera in front of a quarterback and have him throw the ball over the Halo, giving a viewer a more realistic view of that scene. With field goals, placing the camera under the goal posts produces a very interesting visual that wouldn’t work if the top camera wasn’t able to capture the ball going through the uprights. One of the most goosebump-inducing moments at any NFL game is a pregame flyover, which we can now capture in its full glory thanks to the top camera.
What tips do you have for other filmmakers thinking of getting into making VR video content?
Take the time to consider why you want to use VR versus traditional formats to tell your story. I work in both formats and feel that if I’m just telling the same story in VR that I would in HD, then I’m not doing my job as a VR filmmaker. VR gives you the unique opportunity to tell a story in a 360-degree space. Use that space to your advantage in creating something memorable.
Grab your Daydream View and head to YouTube today to watch the first three episodes, and be sure to check back soon to see the rest of season two of “Immersed.”
Editor’s Note: Today’s post is from Becky Torkelson, Computer Support Specialist Leader for Scheels, an employee-owned 27-store chain of sporting goods stores in the Midwest and West. Scheels uses Chrome browser and G Suite to help its 6,000 employees better serve customers and work together efficiently.
Whether customers come to Scheels stores to buy running shoes, fishing rods or camping stoves, they talk to associates who know the products inside and out. We hire people who are experts in what they’re selling and who have a passion for sports and outdoor life. They use Chrome browser and G Suite to check email and search for products from Chromebooks right on the sales floor, so they can spend more time serving customers.
That’s a big improvement over the days when we had a few PCs, equipped with IBM Notes and Microsoft Office, in the back rooms of each store. Associates and service technicians used the PCs to check email, enter their work hours or look up product specs or inventory for customers—but that meant they had to be away from customers and off the sales floor.
Starting in 2015, we bought 100 Chromebooks and 50 Chromeboxes, some of which were used to replace PCs in store departments like service shops. Using Chromebooks, employees in these departments could avoid manual processes that slowed down customer service in the past. With G Suite, Chrome devices and Chrome browser working together, our employees have access to Gmail and inventory records when they work in our back rooms. They can quickly log on and access the applications they need. This means they have more time on the sales floor for face-to-face interaction with customers.
Our corporate buyers, who analyze inventory and keep all of our stores stocked with the products we need, use Google Drive to share and update documents for orders instead of trading emails back and forth. We’re also using Google Sites to store employee forms and policy guides for easy downloading—another way people save time.
We use Chrome to customize home pages for employee groups, such as service technicians. As soon as they log in to Chrome, the technicians see the bookmarks they need—they don’t have to jump through hoops to find technical manuals or service requests. Our corporate buyers also see their own bookmarks at login. Since buyers travel from store to store, finding their bookmarks on any computer with Chrome is a big time-saver.
Our IT help desk team tells me that they rarely get trouble tickets related to Chrome. There was a very short learning curve when we changed to Chrome, which is amazing when you consider we had to choose tools for a workforce of 6,000 people. The IT team likes Chrome’s built-in security: they know malware protection and virus scanning are running and updating in the background, so Chrome is handling security monitoring for us.
Since Scheels is employee-owned, associates have a stake in our company’s success. They’re excited to talk to customers who want to learn about the best gear for their favorite sports. Chrome and G Suite help those conversations stay focused on customer needs and delivering smart and fast service.
The end of the year is fast approaching, but the fun doesn’t have to end after the ball drops in Times Square. When you’re ready to kick off your travel plans for 2018 and take a weekend getaway, check out our trending destinations for travel inspiration, and our new features to feel confident you’re getting a good deal.
Get tips when the price is right
Long weekends are a great excuse to escape to warmer weather, but worrying about getting the best price for your vacation can be stressful. A recent study we did indicated that travelers are most concerned about finding the best price for their vacations – more than with any other discretionary purchase.
Google Flights can help you get out of town, even when you're on a budget. Using machine learning and statistical analysis of historical flight data, Flights displays tips under your search results, and you can scroll through them to figure out when it’s best to book. Say you were searching for flights to Honolulu, and fares to your destination were cheaper than usual. A tip would say that “prices are less than normal” and by how much, indicating you’d spotted a deal. Or, if prices tend to remain steady for the date and place you’re searching, a tip would indicate the price “won’t drop further,” based on our price prediction algorithms.
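At its simplest, a tip like this boils down to comparing the current fare against the historical distribution for that route. The sketch below is purely illustrative; the function name and thresholds are our own, and Google's actual prediction models are far richer:

```python
# A rough sketch of how a "price tip" could be derived from historical fare
# data. Illustrative only; not Google's actual model.
from statistics import mean, stdev

def price_tip(current_price, historical_prices):
    """Return a tip string when the current fare is unusually low or high,
    or None when no tip should be shown."""
    avg = mean(historical_prices)
    spread = stdev(historical_prices)
    if current_price < avg - spread:
        return "Prices are less than normal by about $%.0f" % (avg - current_price)
    if current_price > avg + spread:
        return "Prices are higher than usual"
    return None  # fare is within its typical range; show no tip

# A $320 fare against a ~$405 historical average triggers a deal tip:
print(price_tip(320, [400, 420, 390, 410, 405]))
```

A real system would also account for seasonality and how prices for that route typically move as the departure date approaches.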
Similarly, when you search for a hotel on Google, a new tip will appear above results when room rates are higher than usual, or if the area is busier than usual due to a holiday, music festival, or even a business conference. So if you're planning a trip to San Francisco or Las Vegas, you can make sure you're avoiding dates when big conferences are scheduled and hotel prices tend to be high.
If you prefer to wait and see if prices drop, you can now get email price alerts by opting into Hotel Price Tracking on your phone—this will roll out on desktop in the new year.
See the sights without breaking the bank
Vacation time is precious, and once you book your flight and hotel and arrive at your destination, it’s time to have some fun. Google Trips’ new Discounts feature helps you instantly access deals for ticketing and tours on top attractions and activities. Book and save on a tour of the Mayan ruins near Cancun, or get priority access to the top of the Eiffel Tower in Paris. No matter where you’re headed (and if you need ideas, read on), Trips makes it easy to browse and access fun stuff to do on your vacation without breaking the bank.
Head to the beach for MLK Weekend
People are already searching for flights for Martin Luther King Jr. weekend, from January 12th to 15th. The top trending domestic destinations for MLK Weekend offer a warm climate—with Florida and Hawaii taking the lead. For folks heading out of the country, Cancun and Bangkok are top beach destinations, whereas Rome and Tokyo are top cultural destinations.
Pick a tropical island or go across the pond for Presidents’ Day
Presidents’ weekend is right on the heels of Valentine’s Day next year, so it’s easy to take time off to spend time with that special someone, celebrate singledom with friends, or maybe just treat yourself to a solo adventure. Tropical islands are the most popular for a domestic getaway, with three of Hawaii’s major islands—Oahu, Maui, and Kauai—all trending in flight searches. For international flight searches, Cancun and Bangkok still top the list, but classic European cities like Paris, Rome, and Barcelona are also climbing in popularity.
Amidst celebrations with friends and family this December, start dreaming about your next winter getaway in the new year. We’ll help you get there.
Facebook is home to a wide variety of publishers and creators who make videos that connect people, spark conversation and build community. Today we’re sharing updates on video distribution and our efforts to build effective video monetization tools for our partners that complement great viewing experiences for people:
Video distribution: Updating News Feed ranking to improve distribution of videos that people actively want to watch — for example, videos from Pages that have strong repeat viewership.
Ad Breaks: Improving the viewing experience for people by updating our guidelines for Ad Breaks, and providing new metrics for publishers and creators to understand how their Ad Breaks perform.
Pre-roll: Testing pre-roll ads in places where people intentionally go to watch videos, like Watch.
Today, the majority of video discovery and watch time happens in News Feed. News Feed provides a great opportunity for publishers and creators to reach their audience, drive discovery, and build deeper connections around their content.
We are updating News Feed ranking to improve distribution of videos from publishers and creators that people actively want to watch. With this update, we will show more videos in News Feed that people seek out or return to watch from the same publisher or creator week after week — for example, shows or videos that are part of a series, or from partners who are creating active communities. Engaging one-off videos that bring friends and communities together have always done well in News Feed and will continue to do so.
We also want to make it easier for show creators to reach their existing community. For example, if your Show Page is linked to your existing Page, we will enable you to distribute episodes directly to all your followers. This will make it easier for show creators to grow an audience for new shows, and for people to connect with content they may be interested in. This is something many show creators have been asking for, and we will continue to improve the experience of creating and managing shows.
While News Feed will remain a powerful place for publishers and creators to grow and connect with their audience, over time we expect more repeat viewing and engagement to happen in places like Watch. The Discover tab in Watch will also prioritize shows that people come back to. As new shows build audiences, places like Watch are well suited for people to predictably catch up on the latest episodes and content from their favorite publishers and creators, and engage in a richer social viewing experience – for example, connecting with other fans in a dedicated Facebook Group for the show.
We are updating our monetization features to support the different types of video viewing experiences on our platform.
Branded content is a valuable way for publishers to generate more revenue and will continue to be available to all types of video on Facebook. We’ve seen a wide range of publishers and creators find success with branded content in News Feed, extending sponsorships or partnerships onto our platform in creative ways. Since the beginning of this year, the number of publishers and creators posting branded content each month has increased by 4x, driven in part by opening availability to all Pages.
Viewers tell us they prefer it when the video they are watching “merits” an Ad Break — for example, content they are invested in, content they have sought out, or videos from publishers or creators they care about and are coming back to. These videos tend to be longer, with more narrative development.
As a result, starting in January we will focus the expansion of Ad Breaks on shows, and Ad Break eligibility will shift to videos and episodes that are at least three minutes long, with the earliest potential Ad Break at the one minute mark. Previously, videos in the test were eligible for Ad Breaks if they were a minimum of 90 seconds, with the first Ad Break able to run at 20 seconds.
Our consumer research showed that moving from 90-second to three-minute videos with Ad Breaks improved overall satisfaction. Furthermore, across initial testing, satisfaction increased 18% when we delayed the first Ad Break placement. Viewer satisfaction numbers are typically difficult to lift, so this indicates a positive shift, increasing the likelihood people will continue watching the content through the break.
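The new thresholds can be summarized in a few lines. This is a toy encoding of the rules described above (minimum three-minute videos, earliest break at the one-minute mark); the function and constant names are illustrative, not Facebook's actual API:

```python
# Toy encoding of the January Ad Break eligibility rules described above.
# Names are illustrative, not Facebook's API.

MIN_VIDEO_SECONDS = 3 * 60       # videos must run at least three minutes
EARLIEST_BREAK_SECONDS = 60      # earliest potential Ad Break: the one-minute mark

def ad_break_allowed(video_length_s, proposed_break_s):
    """Return True if an Ad Break may run at the proposed timestamp."""
    return (video_length_s >= MIN_VIDEO_SECONDS
            and EARLIEST_BREAK_SECONDS <= proposed_break_s < video_length_s)

# Under the previous test rules, a 90-second video could break at 20 seconds;
# the same video no longer qualifies:
print(ad_break_allowed(90, 20))    # False
print(ad_break_allowed(240, 60))   # True
```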
To help creators and publishers better understand the performance of their Ad Breaks, we also recently introduced improvements to the metrics we provide. We added a dedicated Ad Break insights tab, so creators and publishers can view their video monetization performance in a dedicated place, separate from their video metrics. We also added two new metrics: Ad Break impressions at the video level and Ad Break CPMs at the video level.
Finally, we are updating the Live Ad Breaks test. The test will no longer support Profiles, and will only support Pages with more than 50k followers. We’re making these changes because we’ve found Profiles and Pages below this threshold are more likely to share live videos that fail to comply with our Content Guidelines for Monetization. Live video publishers below this threshold also tend to have smaller audiences for their broadcasts, and therefore aren’t able to garner meaningful revenue from Ad Breaks. We’ll continue to work jointly with our partners on testing and improving the product to deliver value.
Next year, we will begin testing pre-roll ads in places where people proactively seek out content, like Watch. While pre-roll ads don’t work well in News Feed, we think they will work well in Watch because it’s a place where people visit and come back to with the intention to watch videos. We’ll start with 6-second pre-roll with the goal of understanding what works best for different types of shows across a range of audiences.
We’re excited about the future, and we’re committed to providing our community of publishers and creators with the solutions they need to build a thriving business on our platform.
Traveling on a bus or train is the time for you to do your best music-listening, news-reading, and social-media scrolling ... as long as you don't miss your stop.
A new feature on Google Maps for Android keeps you on track with departure times, ETAs and notifications that tell you when to transfer or get off your bus or train. And you can track your progress along the way, just like you can with driving, walking or biking directions.
To check out the new feature, head into Google Maps. Type your destination, select transit directions, then choose your preferred route. Tap the “Start” button to get on your way (and you won’t miss your stop this time).
Samsung Electronics today announced the new Samsung Notebook 9 Pen and three new versions of the Samsung Notebook 9 (2018), offering a mobile computing experience that matches how people are using their PCs: at work, on the go and everywhere in between. Designed to deliver the best in mobility, these four new notebooks ensure a computing experience that is powerful, portable, connected and secure. When paired with the latest productivity tools and services available from Samsung, the Notebook 9 Pen and Notebook 9 (2018) are the perfect complement to a lifestyle where work or play can happen anywhere, at any time.
“At Samsung, we have long felt that technology must fit within the lives of the consumer, not the other way around. Over the past several years, the lines between our personal and professional lives have started to blur, creating the need for technology that allows us to connect, collaborate and share from anywhere,” said YoungGyoo Choi, Senior Vice President of the PC Business Team, Mobile Communications Business at Samsung Electronics. “The new Samsung Notebook 9 Pen and the enhanced Samsung Notebook 9 (2018) offer our customers premium, powerful and portable devices that provide the tools to securely work from anywhere, breaking the boundaries of previously accepted standards for what a notebook should be.”
Samsung Notebook 9 Pen: Superb Flexibility, Portability with S Pen and Convertible Hinge
The Notebook 9 Pen brings a thoughtfully refined design to the 2-in-1 PC. Its full metal chassis is made of Metal12, a premium magnesium-aluminum alloy that is lighter than aluminum, providing durability while keeping the device super light at 2.2 pounds. The 360-degree hinge lets users convert the laptop to a tablet with ease by rotating the keyboard behind the screen.
The Notebook 9 Pen provides ultimate convenience with a built-in, refined S Pen that gives users the freedom to doodle, write, sketch, paint and more. The S Pen is battery free, built into the device and designed for immediate use. The S Pen can recognize 4,096 levels of pressure with a fine 0.7mm tip and convenient tilt detection to allow a more natural writing and drawing experience. When the S Pen is removed from the notebook, Air Command automatically launches, providing convenient S Pen shortcuts to Samsung Notes and Autodesk SketchBook so that users can write, draw and create immediately.
Samsung Notebook 9 (2018): Powerful Design, yet Lightweight and Thin
With three versions offering screens from 13.3” to 15” and enhanced graphics capabilities, the Notebook 9 (2018) weighs up to 2.84 pounds and measures 15.4mm thick, making it one of the lightest and thinnest notebooks in its class. The bezel around the screen measures just 6.2mm for a more immersive experience. The result is great portability without sacrificing screen size, in a stronger, more durable metal body made of the super-light Metal12 alloy.
The device also provides long-lasting power thanks to the 75Wh Hexacell battery, the largest and most powerful battery Samsung has placed inside a notebook. For the times when charging is necessary, it also supports fast charging.
Living a Blended Lifestyle
The new Samsung Notebook 9 Pen and Notebook 9 (2018) feature key improvements designed to support our personal and professional lives. These features include:
Notebooks designed for mobility – both devices are among the thinnest in their class for easy portability and working on the go. Made from Metal12 and finished with Micro Arc Oxidation (MAO) technology, both devices are lighter than most metal laptops yet durable, thanks to the advanced MAO oxide coating on the surface, ensuring to-do lists get finished regardless of where life takes you.
Added security and convenience – both devices come with Windows Hello built-in for secure authentication through the fingerprint sensor without having to type in a password, while the Notebook 9 Pen also features an IR front-facing camera for facial recognition login through Windows Hello. Both PCs feature the Privacy Folder for storing sensitive data.
High-quality performance – combining the best in portability with a high-quality display and strong performance, the RealView display brings a bright and accurate premium image, with lifelike colors and incredible brightness ideal for using the device both indoors and out. Both devices also feature the latest 8th generation Intel Core i7 processor and Samsung Dual Channel Memory for quick speeds to handle jobs like running multiple programs or viewing and rendering high-quality graphics without reduced performance or speed.
Productivity and collaboration – Samsung offers several tools that appeal to tech-savvy professionals, such as Samsung Link Sharing, which allows users to transfer videos, photos and documents stored on their PC to another computer or smart device*. Additional software solutions available on both devices include Samsung Message, which enables users to send messages from their PC to contacts saved on their smartphone**, as well as preinstalled Voice Note for voice-activated note taking and Studio Plus for custom content creation and editing.
The Samsung Notebook 9 Pen and Notebook 9 (2018) will be available starting in December 2017 in Korea and in the first quarter of 2018 in the U.S. Both devices will be on display at CES 2018 in Las Vegas, NV.
Samsung Notebook 9 (2018) and Notebook 9 Pen Product Specifications***
Graphics: Intel® HD Graphics / NVIDIA® GeForce® MX150 (GDDR5 2GB); Intel® HD Graphics
Weight: 1,250g – 1,290g
Color: Titan Silver / Crush White
Dimensions: 309.4 x 208 x 14.9mm | 347.9 x 229.4 x 15.4mm | 310.5 x 206.6 x 14.6–16.5mm
Ports: USB-C x 1, USB 3.0 x 2, HDMI x 1, uSD, HP/Mic, DC-in | Thunderbolt 3 x 1 (or USB-C), USB 3.0 x 2, USB 2.0 x 1, HDMI x 1, uSD x 1, HP/Mic x 1, DC-in | USB-C x 1, USB 3.0 x 1, HDMI x 1, uSD, HP/Mic, DC-in
Display: 13.3” RealView Display, Full HD (1920 x 1080), sRGB 95%, ΔE < 2.5, max 500 nits | 15.0” RealView Display, Full HD (1920 x 1080), sRGB 95%, ΔE < 2.5, max 500 nits | 13.3” Samsung RealView Touch, FHD (1920 x 1080), sRGB 95%, ΔE < 2.5, max 450 nits
Camera: IR camera, 720p
Speakers: 1.5W x 2
S Pen: Integrated S Pen (bundled)
Keyboard: Backlit keyboard, Precision Touchpad
Adapter: 65W adapter (DC-in) | 65W adapter (DC-in) | 45W compact adapter (DC-in)
*Size limit for each file is 1GB, and up to 2GB of contents can be shared within 48 hours.
**The feature utilizes SMS and the carrier may charge for the usage depending on the data plan. Feature availability varies depending on the region.
***All functionality, features, specifications and other product information provided in this document including, but not limited to, the benefits, design, pricing, components, performance, availability, and capabilities of the product are subject to change without notice or obligation.
As excitement about the new film “Star Wars: The Last Jedi” has reached fever pitch, it’s worth reflecting on those behind the enduring popularity of the Star Wars universe: the fans. The Star Wars community has fueled the success of Star Wars themed toys, games and products for several decades. It’s this dedication that has led the Star Wars Limited Edition POWERbot to become the most popular design, selling more than five times as many units as previous POWERbot models over a comparable launch period. Samsung’s design team reveals how it worked with the Star Wars community to create a product designed by fans, for fans.
Collaborating with Super Fans
So where did the idea originally come from? Dongwook Kim from the Digital Business Division revealed: “I’m a big fan of Star Wars and I began to wonder about how to make a vacuum cleaner more fun, exciting, and something that users could enjoy using daily. Then, it came to me: How about Star Wars designs on the POWERbot vacuum cleaners?”
The device strongly evokes two of Star Wars’ most iconic images: the masks of Darth Vader and a Stormtrooper. Each edition includes detailed features of the two characters, right down to their sounds. For example, when the Darth Vader unit starts, the vacuum plays ‘The Imperial March’ and mimics the Sith Lord’s chillingly mechanical inhale and exhale. When you turn on the Stormtrooper version, you will hear ‘let’s go’ with the theme song playing in the background.
To fulfill user expectations, Samsung Electronics also collaborated with an online community of Star Wars fans. The members of the community are so passionate about Star Wars that many of them say being a Star Wars fan is their job. Some decorate their vehicles with Star Wars images, and others mimic the voice of Darth Vader. In fact, a member of the online fan community provided the voice for the Darth Vader edition POWERbot.
The collaborative efforts with 20 members of the online fan community helped the design team to come up with the most detailed features, such as the color scheme, the number of grills, how deep the eyes should look and feel, and other different details. As the two sides worked together, the POWERbot continually evolved.
Overcoming Design Challenges
The designers’ task was to create a product that conveys the essence of Darth Vader from the movies. To find the perfect design, the team focused on the color and the appearance of the eyes.
They studied many different shades before they discovered Death Black, a color that perfectly fits the character. Sangin Lee from Design 2 Group said, “Death Black is really fitting for a character like Darth Vader: it’s just the right shade of menacing.” He added, “I never knew there were so many different shades of black; it wasn’t easy to narrow it down to just one.”
The eyes obscured by the mask on Darth Vader’s face create a particularly sinister feeling, so the design team made that part of the product even darker. Yet if the shape of the eyes was even slightly irregular or asymmetrical, the whole look was distorted. The team went through a lengthy process of trial and error as they changed the dimensions over and over again, with similar painstaking attention to detail required for the Stormtrooper model.
“We drew the characters on the products so many times and printed out tens of mockups. But, seeing it in two dimensions on the screen and then seeing it in three after applying it to the product are very different experiences,” said Sangin Lee. “We decided to use silk printing to ensure the quality but when we first brought the design to silk printing experts, some said it would be too difficult.”
Thankfully, the design team was able to locate a silk printing expert in Gwangju, a city 268 km from Seoul. The team invited the expert along on its business trip to Vietnam, where they visited the production site and conducted training sessions on applying the design to the POWERbot.
Keeping Star Wars at the Center of Design
It wasn’t just the appearance of the POWERbot which was transformed: the design team applied the Star Wars concept to every part of the Star Wars Limited Edition POWERbot.
Nooree Na from UX Innovation suggested that they add voices from the movies. “When we first started this project, I watched the whole series of Star Wars films and read each and every script from cover to cover,” she explained. “Focusing on the Stormtroopers and Darth Vader, I narrowed down the quotes that could be applied to our product. The previous model has a very friendly voice, but Darth Vader is not that type of character, as we all know. Even so, we didn’t want the cleaner to sound like it is ordering our users about! It was tough work finding just the right expressions to fit the product.”
Sunghoon Jung from UX Innovation, who was in charge of packaging design, understands how important it is to meet the expectations of the fans: “Many super fans keep the packaging as well as the products, so we paid a lot of attention to designing the box as well as what’s inside it.”
In addition to an eye-catching design, the Star Wars Limited Edition POWERbot promises all the powerful cleaning technology and features of the original VR7000. CycloneForce technology offers high-end suction power, and Edge Clean Master helps the unit clean close to walls and edges. What’s more, the product has a Navigation Camera that recognizes the layout of the entire house and finds the fastest cleaning route, and a FullView Sensor 2.0 that lets the unit clean around home decorations and valuable personal items as small as 10mm.
“The team members say that when you are using Edge Clean Master, it looks as if Darth Vader has taken out his lightsaber,” said Sangin Lee from Design 2 Group. “We actually had a lot of fun in completing this challenging project, and we hope users have fun as well, as they use the product.”
This is the first time that Samsung Electronics has applied iconic movie characters to its home appliance products. Samsung hopes that the Star Wars Limited Edition POWERbot can help users feel that cleaning is no longer a chore, but is something they can enjoy doing, as they try the many different fun features.
Siti Arofa teaches a first grade class at SD Negeri Sidorukan in Gresik, East Java. Many of her students start the school year without foundational reading skills or even an awareness of how fun books can be. But she noticed that whenever she read out loud using different expressions and voices, the kids would sit up and their faces would light up with excitement. One 6-year-old student, Keyla, loves repeating the stories with a full imitation of Siti’s expressions. Developing this love for stories and storytelling has helped Keyla and her classmates improve their reading and speaking skills. She’s just one child. Imagine the impact that the availability of books and skilled teachers can have on generations of schoolchildren.
In Indonesia today, it's estimated that for every 100 children who enter school, only 25 exit meeting minimum international standards of literacy and numeracy. This poses a range of challenges for a relatively young country, where nearly one-third of the population—or approximately 90 million people—are below the age of 15.
We’ve consistently heard from Indonesian educators and nonprofits that there’s a need for more high-quality storybooks. With $2.5 million in grants, the nonprofits will create a free digital library of children's stories that anyone can contribute to. Many Googlers based in our Jakarta office have already volunteered their time to translate existing children’s stories into Bahasa Indonesia to increase the diversity of reading resources that will live on this digital platform.
The nonprofits will develop teaching materials and carry out teacher training in eastern Indonesia to enhance teaching methods that improve literacy, and they’ll also help Indonesian authors and illustrators to create more engaging books for children.
Through our support of this work, we hope we can inspire a lifelong love of reading for many more students like Keyla.
Samsung Electronics and Amazon Prime Video today announced the entire Prime Video HDR library is now available in HDR10+, a new open standard that leverages dynamic metadata to produce enhanced contrast and colors on an expanded range of televisions. The Prime Video HDR10+ catalogue includes hundreds of hours of content such as Prime Originals The Grand Tour, The Marvelous Mrs. Maisel, Jean-Claude Van Johnson, The Tick and The Man in the High Castle plus hundreds of licensed titles. Prime Video is the first streaming service provider to deliver HDR10+ content to its users. HDR10+ is available on the entire Samsung 2017 UHD TV lineup – including the premium QLED TV models.
“We are thrilled to announce HDR10+ content for consumers,” said Sang Yoon Kim, Vice President of Smart TV Business Development at Samsung Electronics America. “The launch marks the first opportunity for consumers and the industry to experience HDR10+ technology through a streaming service.”
The HDR10+ technology on Prime Video incorporates dynamic metadata that allows high dynamic range (HDR) TVs to adjust brightness levels on a scene-by-scene or even frame-by-frame basis. By applying individualized tone mapping to each scene, HDR10+ delivers an incredible viewing experience for next generation Samsung displays. The picture quality offers an enhanced visual experience with more detailed expressions, brighter shadow areas and more accurate color renderings that stay true to the creator’s original intent. Through the utilization of the precise “Bezier” based tone-mapping guides, Amazon is able to deliver the optimal viewing experience across a variety of TV models.
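To make the idea of dynamic metadata concrete, here is a deliberately simplified sketch of per-scene tone mapping. The function and parameter names are hypothetical and not taken from the HDR10+ specification; a real implementation would use a perceptual, Bezier-guided curve rather than this linear compression.

```python
# Illustrative sketch only: per-scene tone mapping in the spirit of dynamic
# metadata (HDR10+). With static metadata, one curve chosen for the film's
# brightest scene is applied everywhere; with dynamic metadata, each scene
# carries its own peak luminance, so dim scenes keep their full range.

def tone_map_scene(pixels_nits, scene_max_nits, display_max_nits):
    """Map scene-referred luminance values (in nits) onto a display's range."""
    # Scale only when the scene's peak exceeds what the display can show.
    scale = min(1.0, display_max_nits / scene_max_nits)
    out = []
    for v in pixels_nits:
        # Simple linear compression toward the display peak; real tone
        # mapping uses a perceptual curve to preserve midtone contrast.
        out.append(min(v * scale, display_max_nits))
    return out

# A bright scene mastered to 4000 nits, shown on a 1000-nit display:
bright = tone_map_scene([100, 2000, 4000], 4000, 1000)
# A dim scene whose own peak is only 500 nits passes through untouched:
dim = tone_map_scene([5, 100, 500], 500, 1000)
```

The benefit of the per-scene metadata is visible in the second call: because the dim scene's own peak fits the display, none of its values are compressed, whereas a single static curve sized for the 4000-nit scene would have darkened it.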
“We’re dedicated to offering Prime Video members the best possible viewing experience, and we are very excited for our members around the world to experience our content in HDR10+,” said Greg Hart, Vice President of Prime Video. “The viewing experience of HDR10+ combined with Prime Video’s award-winning content is ushering in a new era of entertainment for consumers on these devices.”
The launch of HDR10+ on Prime Video, along with the HDR10+ logo and certification partnership formed earlier this year by Samsung, 20th Century Fox and Panasonic, demonstrates the adoption and growth of HDR10+ as well as a growing commitment to delivering a premium HDR experience.
About Prime Video
Amazon Video is a premium on-demand entertainment service that offers customers the greatest choice in what to watch, and how to watch it. Prime Video offers thousands of movies and TV shows, including popular licensed content plus critically-acclaimed and award-winning Prime Originals and Amazon Original Movies from Amazon Studios, such as The Tick; the most-watched Prime Video series worldwide, The Grand Tour from Jeremy Clarkson, Richard Hammond and James May; and award-winning series like The Man in the High Castle and Mozart in the Jungle, all available for unlimited streaming as part of an Amazon Prime membership. Prime Video is also now available to customers in more than 200 countries and territories around the globe at www.primevideo.com.
Today’s leading home security products showed just how effective they really are when independent research facility AV-TEST studied each product’s responses to real-world threats. While some brands missed the mark, AVG Internet Security achieved a perfect score, blocking every one of the nearly 10,000 malware samples used in the test.
The study was conducted throughout September and October this year, and AV-TEST used a combination of recently-discovered malware and real-world threats that currently live and breathe on the internet. All types of malware were deployed—malicious websites, infected emails, Trojans, worms, viruses, and more.
The true achievement here was protecting against each and every one of the real-world threats used in the test. The test provided a real-time demonstration of how AVG Internet Security reacts when it encounters unknown malware: our AI-powered antivirus swiftly identifies, blocks, and protects against any threat.
At RSNA 2017, the annual event from the Radiological Society of North America, attendees were intrigued by the new digital x-ray from Samsung because it offered the same image quality as other x-rays with 50% less radiation. And now it has been approved in the US, offering patients a new way to undergo the procedure.
Bright sunflowers were part of the ‘dose less, care more’ campaign at the show. The flowers were a natural choice for the initiative because of their ability to take up high concentrations of radioactive elements and toxins into their tissues, thereby purifying soils of contaminants. This symbolized the ability of Samsung’s new x-ray machine to emit lower levels of radiation.
Reducing the amount of radiation is always the goal for radiologists. Although we’re exposed to low levels of radiation in everyday life from many sources – even the air we breathe – no one wants to expose patients to more radiation than is necessary. The Samsung GC85A premium digital x-ray is a welcome addition to hospitals then, as it emits half as much radiation – which is a huge breakthrough.
The S-Vue engine in the GC85A recently received approval from the US Food and Drug Administration (FDA). The FDA is the US regulator of medical treatments and its approval is the gateway for Samsung’s new x-ray to be rolled out across the country.
What makes the GC85A so unique are the spatially adaptive multi-scale processing and advanced de-noising technology that enable high-quality images even at half the dose. The technique was tested in a study by Professor Semin Chong of Chung-Ang University, who carried out what’s known in the industry as a phantom test, in which a model based on the human body is used to test the effects of x-ray technology without having to use a real human. The image processing did indeed produce an equivalent image evaluation score when the dosage was halved.
In practice, patients who receive a chest x-ray from the GC85A are exposed to 8 microsieverts (μSv) of radiation. That’s about the same dose as eating 80 bananas, and as low as the average effective dose absorbed during a three-hour flight from Chicago to New York. In other words, the GC85A’s radiation levels are low, sparing patients the higher doses that other machines emit.
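The banana comparison above is easy to check. The commonly cited "banana equivalent dose" of roughly 0.1 μSv per banana is an informal figure, used here purely for illustration:

```python
# Back-of-the-envelope check of the dose comparison in the text.
CHEST_XRAY_GC85A_USV = 8.0   # reported dose per GC85A chest x-ray, in μSv
BANANA_EQUIVALENT_USV = 0.1  # approximate dose from eating one banana (informal figure)

bananas = CHEST_XRAY_GC85A_USV / BANANA_EQUIVALENT_USV
print(f"One GC85A chest x-ray is roughly {bananas:.0f} bananas' worth of dose")
```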
“We are pleased with the approval of the US FDA,” said Insuk Song, Vice President of Health & Medical Equipment Business at Samsung Electronics. “We plan to expand dose-reduction x-rays to other parts of the body, such as the abdomen and limbs, and to infants in the future. We will also continue our efforts to reduce radiation.”
Developing 3D apps is complicated—whether you’re using a native graphics API or enlisting the help of your favorite game engine, there are thousands of graphics commands that have to come together perfectly to produce beautiful 3D visuals on your phone, desktop or VR headsets.
To help developers diagnose rendering and performance issues with their Android and desktop applications, we’re releasing a new tool called GAPID (Graphics API Debugger). With GAPID, you can capture a trace of your application and step through each graphics command one-by-one. This lets you visualize how your final image is built and isolate calls with issues, so you spend less time debugging through trial and error until you find the source of the problem.
The goal of GAPID is to help you save time and get the most out of your GPU. To get started with GAPID, download it, take your favorite application, and capture a trace!
From phones to speakers to watches and more, the Google Assistant is already available across a number of devices and languages—and now, it’s coming to Android tablets running Android 7.0 Nougat and 6.0 Marshmallow and phones running 5.0 Lollipop.
The Google Assistant, now on Tablets
With the Assistant on tablets, you can get help throughout your day—set reminders, add to your shopping list (and see that same list on your phone later), control your smart devices like plugs and lights, ask about the weather and more.
The Assistant on tablets will be rolling out over the coming week to users with the language set to English in the U.S.
Lollipop phones, introducing your Assistant
Earlier this year we first brought the Assistant to Android 6.0 Marshmallow and higher with Google Play Services. Today, we’re adding Android 5.0 Lollipop to the mix, so even more users can get help from the Google Assistant.
The Google Assistant on Android 5.0 Lollipop has started rolling out to users with the language set to English in the U.S., UK, India, Australia, Canada and Singapore, as well as in Spanish in the U.S., Mexico and Spain. It’s also rolling out to users in Italy, Japan, Germany, Brazil and Korea. Once you get the update and opt in, you’ll see an Assistant app icon in your “All apps” list.
So now the question is … What will you ask your Assistant first?
This week we’re looking at the ways the Google News Lab is working with news organizations to build the future of journalism. Yesterday, we learned about how the News Lab works with newsrooms to address industry challenges. Today, we’ll take a look at how it helps the news industry take advantage of new technologies.
From Edward R. Murrow’s legendary radio broadcasts during World War II to smartphones chronicling every beat of the Arab Spring, technology has had a profound impact on how stories are discovered, told, and reach new audiences. With the pace of innovation quickening, it’s essential that news organizations understand and take advantage of today’s emerging technologies. So one of the roles of the Google News Lab is to help newsrooms and journalists learn how to put new technologies to use to shape their reporting.
Our efforts to help this growing class of journalists focus on two areas: curating Google data to fuel newsrooms’ work and building tools to make data journalism accessible.
On the curation side, we work with some of the world’s top data visualists to inspire the industry with data visualizations like Inaugurate and a Year in Language. We're particularly focused on ensuring news organizations can benefit from Google Trends data in important moments like elections. For example, we launched a Google Trends election hub for the German elections, highlighting Search interest in top political issues and parties, and worked with renowned data designer Moritz Stefaner to build a unique visualization to showcase the potential of the data to inform election coverage across European newsrooms.
We’re also building tools that can help make data journalism accessible to more newsrooms. We expanded Tilegrams, a tool to create hexagon maps and other cartograms more easily, to support Germany and France in the runup to the elections in both countries. And we partnered with the data visualization design team Kiln to make Flourish, a tool that offers complex visualization templates, freely available to newsrooms and journalists.
As new mediums of storytelling emerge, new techniques and ideas need to be developed and refined to unlock the potential of these technologies for journalists. This year, we focused on two technologies that are making storytelling in journalism more compelling: virtual reality and drones.
We also looked to strengthen the ecosystem for VR journalism by growing Journalism 360, a group of news industry experts, practitioners and journalists dedicated to empowering experimentation in VR journalism. In 2017, J360 hosted in-person trainings on using VR in journalism from London to Austin, Hong Kong to Berlin. Alongside the Knight Foundation and the Online News Association, we provided $250,000 in grants for projects to advance the field of immersive storytelling.
Drones
The recent relaxation of regulations by the Federal Aviation Administration around drones made drones more accessible to newsrooms across the U.S., leading to growing interest in drone journalism. Alongside the Poynter Institute and the National Press Photographers Association, we hosted four drone journalism camps across America where more than 300 journalists and photographers learned about legal, ethical and aeronautical issues of drone journalism. The camps helped inspire the use of drones in local and national news stories. Following the camps, we also hosted a leadership summit, where newsroom leaders convened to discuss key challenges on how to work together to grow this emerging field of journalism.
We want to help newsrooms better understand and use artificial intelligence (AI), a technological development that holds tremendous promise—but also many unanswered questions. To try to get to some of the answers, we convened CTOs from the New York Times and the Associated Press to our New York office to talk about the future of AI in journalism and the challenges and opportunities it presents for newsrooms.
We also launched an experimental project with ProPublica, Documenting Hate, which uses AI to generate a national database for hate crime and bias incidents. Hate crimes in America have historically been difficult to track since there is very little official data collected at the national level. By using AI, news organizations are able to close some of the gaps in the data and begin building a national database.
2018 will no doubt bring more opportunity for journalists to innovate using technology. We’d love to hear from journalists about what technologies we can make more accessible and what kinds of programs or hackathons you’d like to see—let us know.
Developing for 3D is complicated. Whether you're using a native graphics API or enlisting the help of your favorite game engine, there are thousands of graphics commands that have to come together perfectly to produce beautiful 3D images on your phone, desktop or VR headsets.
GAPID (Graphics API Debugger) is a new tool that helps developers diagnose rendering and performance issues with their applications. With GAPID, you can capture a trace of your application and step through each graphics command one-by-one. This lets you visualize how your final image is built and isolate problematic calls, so you spend less time debugging through trial-and-error.
GAPID supports OpenGL ES on Android, and Vulkan on Android, Windows and Linux.
Debugging in action, one draw call at a time
GAPID not only enables you to diagnose issues with your rendering commands, but also acts as a tool to run quick experiments and see immediately how these changes would affect the presented frame. Here are a few examples where GAPID can help you isolate and fix issues with your application:
What's the GPU doing?
Why isn't my text appearing?!
Working with a graphics API can be frustrating when you get an unexpected result, whether it's a blank screen, an upside-down triangle, or a missing mesh. As an offline debugger, GAPID lets you take a trace of these applications, and then inspect the calls afterwards. You can track down exactly which command produced the incorrect result by looking at the framebuffer, and inspect the state at that point to help you diagnose the issue.
What happens if I do X?
Using GAPID to edit shader code
Even when a program is working as expected, sometimes you want to experiment. GAPID allows you to modify API calls and shaders at will, so you can test things like:
What if I used a different texture on this object?
What if I changed the calculation of bloom in this shader?
With GAPID, you can now iterate on the look and feel of your app without having to recompile your application or rebuild your assets.
Whether you're building a stunning new desktop game with Vulkan or a beautifully immersive VR experience on Android, we hope that GAPID will save you both time and frustration and help you get the most out of your GPU. To get started with GAPID and see just how powerful it is, download it, take your favorite application, and capture a trace!
By Sean Kelly, Product Management Director, Messenger
Over the holidays we all want to take time to celebrate, reflect and stay in touch with the people we love. It’s been 25 years since the first text message was sent, sparking a revolution in how we keep in touch with each other, and we’ve come a long way since then. The art of conversation has evolved, and we’re no longer limited by just text. Just think about it: now you can group video chat with masks, choose from thousands of emojis or GIFs to add more color to your messages, and immediately capture and share photos, even when you’re already in a conversation.
At Messenger, we know that every message matters and we’re focused on helping people say what they want to say, however they want to say it. Through GIFs, videos, group conversations or group video chats, Messenger gives people the freedom to connect in the way that is most relevant to them — expressive, humorous, visual, heartfelt or simply convenient.
By understanding how people are messaging today, we can continue to make Messenger the best place to connect with the people you care about most. We’re excited to look back over the year, and highlight the top ways we saw Messenger’s 1.3 billion strong global community connect and share with each other in 2017.
Video chat took a leap forward.
Chatting face-to-face is perfect for those spontaneous moments when text just isn’t enough. We heard from people that they wanted more than just one-to-one video chats, which is why we launched group video chat about a year ago. The experience is the same whether you’re on Android or iOS — and we introduced a few new Augmented Reality features – like masks, filters and reactions – in June to make your video chats more fun and expressive.
Overall, there were 17 billion real-time video chats on Messenger, twice as many video chat sessions in 2017 as in 2016.
People video chatted with each other all around the world — including Antarctica! We can only imagine how awesome it was to share a moment in front of icebergs and penguins with friends and family back home.
Visual messaging brings our conversations to life.
Visual messaging is now our new universal language, making our conversations more joyful, impactful, and let’s face it, a whole lot more fun! This year we continued investing in our powerful and fast camera, pre-loaded with thousands of stickers, frames and other effects to make your conversations better than ever. Here’s how people expressed themselves and added delight to their conversations this year:
People shared over 500 billion emojis in 2017, or nearly 1.7 billion every day
GIFs are a popular choice too, with 18 billion GIFs shared in 2017
On average, there are over 7 billion conversations taking place on Messenger every day in 2017.
At the same time, on average, 260 million new conversation threads were started every day in 2017.
The holidays are popular times to connect with each other online as well as offline. New Years, Mother’s Day, and Valentine’s Day were three of the top five most active days for chats on Messenger.
And it’s all about the power of groups.
This year we introduced several new features to make group chats in Messenger more fun and useful, including @mentions, which make it easy to jump right back into the conversation to answer someone’s question or provide a response, and reactions, which give you a quick way to acknowledge or express how you feel with an emoji. We found that 2017 was a year to connect both with the people you care about most and with the groups of people you care about most:
In 2017, 2.5 million new groups were created on Messenger EVERY day
The average group chat includes 10 people.
Since launching in March, people shared more than 11 billion reactions, up from two billion shared in June. The most popular reaction in a group conversation is
Of course, reactions are just as fun in 1:1 conversations, too. The most popular reaction in 1:1 conversations is
We offer people on Messenger fun ways to customize their group chats. The most popular custom emoji is the red heart, and the most popular custom chat color is red.
As the year comes to a close, we want to extend a big thank you to our Messenger community – we are so happy to be part of your everyday lives and we can’t wait to help you chat longer, play games, take great photos or message a business in 2018! Thank you for trusting us with your messages that matter.
*Methodology – the Messenger data is reflective of January 2017 through November 2017
Over the past few years, many people have experienced virtual reality with headsets like Cardboard, Daydream View, and higher-end PC units like Oculus Rift and HTC Vive. Now, augmented reality has the potential to reach people right on their mobile devices. AR can bring information to you, and that digital information can enhance the experience you have with your physical space. However, AR is new, so creators need to think carefully when it comes to designing intuitive user interactions.
From our own explorations, we’ve learned a few things about design patterns that may be useful for creators as they consider mobile AR platforms. For this post, we revisited our learnings from designing for head-mounted displays, mobile virtual reality experiences, and depth-sensing augmented reality applications. First-party apps such as Google Earth VR and Tilt Brush allow users to explore and create with two positionally-tracked controllers. Daydream helped us understand the opportunities and constraints for designing immersive experiences for mobile. Mobile AR introduces a new set of interaction challenges. Our explorations show how we’ve attempted to adapt emerging patterns to address different physical environments and the need to hold the phone throughout an entire application session.
Key design considerations
Mobile constraints. Achieving immersive interactions is possible through a combination of the device's camera, real-world coordinates for digital objects, and input methods of screen-touch and proximity. Since mobile AR experiences typically require at least one hand to hold the phone at all times, it's important for interactions to be discoverable, intuitive, and easy to achieve with one or no hands. The mobile device is the user’s window into the augmented world, so creators must also consider ways to make their mobile AR experiences enjoyable and usable for varying screen sizes and orientations.
Mobile mental models and dimension-shifts. Content creators should keep in mind existing mental models of mobile AR users. 2D UI patterns, when locked to the user’s mobile screen, tend to lead to a more sedentary application experience; however, developers and designers can get creative with world-locked UI or other interaction patterns that encourage movement throughout the physical space in order to guide users toward a deeper and richer experience. The latter approach tends to be a more natural way to get users to learn and adapt to the 3D nature of their application session and more quickly begin to appreciate the value a mobile AR experience has to offer — such as observing augmented objects from many different angles.
Environmental considerations. Each application has a dedicated "experience space," which is a combination of the physical space and range of motion the experience requires. Combined with ARCore's ability to detect varying plane sizes or overlapping planes at different elevations, this opens the door to unique volumetric responsive design opportunities that allow creators to determine how digital objects should react or scale to the constraints of the user's mobile play space. Visual cues like instructional text or character animations can direct users to move around their physical spaces in order to reinforce the context switch to AR and encourage proper environment scanning.
Visual affordances. Advanced screen display and lighting technology makes it possible for digitally rendered objects to appear naturally in the user’s environment. Volumetric UI patterns can complement a 3D mobile AR experience, but it’s still important that they stand out as interactive components so users get a sense of selection state and functionality. In addition to helping users interact with virtual objects in their environment, it’s important to communicate the planes that the mobile device detects in order to manage the users’ expectations for where digital items can be placed.
Mobile AR 2D interactions. With mobile AR, we’ve seen applications of a 2D screen-locked UI which gives users a “magic-hand” pattern to engage with the virtual world via touch inputs. The ability to interact with objects from a distance can be very empowering for users. However, because of 2D UI patterns' previous association with movement-agnostic experiences, users are less likely to move around. If physical movement is a desired form of interaction, mobile AR creators can consider ways to more immediately use plane detection, digital object depth, and phone-position to motivate exploration of a volumetric space. But be wary of too much 2D UI, as it can break immersion and disconnect the user from the AR experience.
Mobile AR immersive interactions. To achieve immersion, we focused on core mobile AR interaction mechanics ranging from object interaction, browsing, information display, and visual guidance. It's possible to optimize for readability, usability, and scale by considering ways to use a fixed position or dynamic scaling for digital objects. Using a reticle or raycast from the device is one way to understand intent and focus, and designers and developers may find it appropriate to have digital elements scale or react based on where the camera is pointing. Having characters react with an awareness to how close the user is, or revealing more information about an object as a user approaches, are a couple great examples of how creators can use proximity cues to reward exploration and encourage interaction via movement.
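The proximity-cue idea described above can be sketched very simply: scale an object's reaction to how close the camera is. This is a minimal illustration; the function name, thresholds, and coordinate conventions are all hypothetical and not taken from any AR SDK.

```python
# Minimal sketch of a proximity cue: reveal more of a digital object's
# detail or behavior as the user's camera approaches it. Positions are
# (x, y, z) world coordinates in meters; `near` and `far` are illustrative
# thresholds, not values from a real AR framework.
import math

def proximity_factor(camera_pos, object_pos, near=0.5, far=3.0):
    """Return 1.0 at `near` meters or closer, 0.0 at `far` meters or
    beyond, interpolating linearly in between."""
    d = math.dist(camera_pos, object_pos)
    if d <= near:
        return 1.0
    if d >= far:
        return 0.0
    return (far - d) / (far - near)

# As the user walks toward a character at the origin, more is revealed:
for d in (4.0, 2.0, 0.4):
    f = proximity_factor((0, 0, d), (0, 0, 0))
    print(f"distance {d} m -> reveal {f:.2f}")
```

An app might drive a character's animation blend weight or an info panel's opacity from this factor, rewarding physical exploration with movement rather than requiring a screen tap.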
These are some early considerations for designers. Our team will be publishing guidelines for mobile AR design soon. There are so many unique problems that mobile AR can solve and so many delightful experiences it can unlock. We’re looking forward to seeing what users find compelling and sharing what we learn along the way, too. In the meantime, continue making and breaking things!
In October 2015, as part of our Digital News Initiative (DNI)—a partnership between Google and news publishers in Europe to support high-quality journalism through technology and innovation—we launched the €150 million DNI Innovation Fund. Today, we’re announcing the recipients of the fourth round of funding, with 102 projects in 26 European countries being offered €20,428,091 to support news innovation projects. This brings the total funding offered so far to €94 million.
In this fourth round, we received 685 project submissions from 29 countries. Of the 102 projects funded today, 47 are prototypes (early stage projects requiring up to €50,000 of funding), 33 are medium-sized projects (requiring up to €300,000 of funding) and 22 are large projects (requiring up to €1 million of funding).
In the last round, back in July, we saw a significant uptick in interest in fact checking projects. That trend continues in this round, especially in the prototype project category. In the medium and large categories, we encouraged applicants to focus on monetization, which led to a rise in medium and large projects seeking to use machine learning to improve content delivery and turn more readers into subscribers. Overall, 21 percent of the selected projects focus on the creation of new business models, and 13 percent on improving content discovery by using personalisation at scale. Around 37 percent of selected projects are collaborations between organizations with similar goals. Other projects include work on analytics measurement, audience development and new advertising opportunities. Here’s a sample of some of the projects funded in this round:
[Prototype] Stop Propaghate - Portugal
With €49,804 of funding from the DNI Fund, Stop Propaghate is developing an API supported by machine learning techniques that could help news media organizations 1) automatically identify if a portion of news reporting contains hate speech, and 2) predict the likelihood of a news piece to generate comments containing hate speech. The project is being developed by the Institute for Systems and Computer Engineering, Technology and Science (INESC TEC), a research & development institute located at University of Porto in Portugal.
[Medium] SPOT - France
Spot is an Artificial Intelligence-powered marketplace for curating, translating and syndicating valuable articles among independent media organizations, and is being developed by VoxEurop, a European news and debate website. With €281,291 of funding from the DNI Innovation Fund, Spot will allow publishers to easily access, buy and republish top editorial from European news organizations in their own languages, using AI data-mining technologies, summarization techniques and automatic translation technologies, alongside human content curation.
[Large] ML-based journalistic content recommendation system - Finland
Digital news media companies produce much more content than ever reaches their readers, because existing content delivery mechanisms tend to serve customers en masse, instead of individually. With €490,000 of funding from the DNI Innovation Fund, Helsingin Sanomat will develop a content recommendation system, using machine learning technologies to learn and adapt according to individual user behavior, and taking into account editorial directives.
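As a rough illustration of the approach described above, here is a toy recommender that scores candidate articles by tag overlap with a reader's click history, with an editorial boost standing in for "editorial directives." This is a hypothetical sketch of the general technique, not Helsingin Sanomat's actual system; all names and data are invented.

```python
# Toy content recommender: learn a reader's interest profile from clicked
# articles' topic tags, score candidates by overlap, and let an optional
# editorial boost promote chosen pieces. Purely illustrative.
from collections import Counter

def recommend(clicked, candidates, editorial_boost=None, k=2):
    """Return the ids of the top-k candidate articles for this reader."""
    boost = editorial_boost or {}
    # The reader's profile: how often each tag appears in their history.
    profile = Counter(tag for art in clicked for tag in art["tags"])
    def score(art):
        return sum(profile[t] for t in art["tags"]) + boost.get(art["id"], 0)
    # sorted() is stable, so equally-scored articles keep their input order.
    ranked = sorted(candidates, key=score, reverse=True)
    return [a["id"] for a in ranked[:k]]

history = [{"id": "a1", "tags": ["politics", "eu"]},
           {"id": "a2", "tags": ["politics", "economy"]}]
pool = [{"id": "b1", "tags": ["sports"]},       # score 0
        {"id": "b2", "tags": ["politics"]},      # score 2
        {"id": "b3", "tags": ["economy", "eu"]}] # score 2
print(recommend(history, pool))  # -> ['b2', 'b3']
```

A production system would replace tag counts with a learned model of behavior, but the structure — per-user profile, candidate scoring, editorial override — is the same.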
The recipients of fourth round funding were announced at a DNI event in London, which brought together people from across the news industry to celebrate the impact of the DNI and Innovation Fund. Project teams that received funding in Rounds 1, 2 or 3 shared details of their work and demonstrated their successes in areas like local news, fact checking and monetization.
Since February 2016, we’ve evaluated more than 3,700 applications, carried out 935 interviews with project leaders, and offered 461 recipients in 29 countries a total of €94 million. It’s clear that these projects are helping to shape the future of high-quality journalism—and some of them are already directly benefiting the European public. The next application window will open in the spring. Watch out for details on the digitalnewsinitiative.com website and check out all DNI funded projects!
As 2017 draws to a close, it’s time to look back on the year that was with our annual Year in Search. As we do every year, we analyzed Google Trends data to see what the world was searching for.
2017 was the year we asked “how…?” How do wildfires start? How to calm a dog during a storm? How to make a protest sign? In fact, all of the “how” searches you see in the video were searched at least 10 times more this year than ever before. These questions show our shared desire to understand our experiences, to come to each other’s aid, and, ultimately, to move our world forward.
Many of our trending questions centered around the tragedies and disasters that touched every corner of the world. Hurricanes devastated the Caribbean, Houston and Florida. An earthquake struck Mexico City. Famine struck Somalia, and Rohingya refugees fled for safety. In these moments and others, our collective humanity shined as we asked “how to help” more than ever before.
We also searched for ways to serve our communities. People asked Google how to become police officers, paramedics, firefighters, social workers, activists, and other kinds of civil servants. We didn’t just want to help once; we wanted to give back year-round.
Searches weren’t only related to current events—they were also a window into the things that delighted the world. “Despacito” had us dancing—and searching for its meaning. When it came to cyberslang like “tfw” and “ofc,” we were all ¯\_(ツ)_/¯. And, finally, there was slime. We searched how to make fluffy, stretchy, jiggly, sticky, and so many more kinds of slime….then we searched for how to clean slime out of carpet, and hair, and clothes.
From “how to watch the eclipse” and “how to shoot like Curry,” to “how to move forward” and “how to make a difference,” here’s to this Year in Search. To see the top trending lists from around the world, visit google.com/2017.
Since becoming a professor 12 years ago and joining Google a year ago, I’ve had the good fortune to work with many talented Chinese engineers, researchers and technologists. China is home to many of the world's top experts in artificial intelligence (AI) and machine learning. All three winning teams of the ImageNet Challenge in the past three years have been largely composed of Chinese researchers. Chinese authors contributed 43 percent of all content in the top 100 AI journals in 2015—and when the Association for the Advancement of AI discovered that their annual meeting overlapped with Chinese New Year this year, they rescheduled.
I believe AI and its benefits have no borders. Whether a breakthrough occurs in Silicon Valley, Beijing or anywhere else, it has the potential to make life better for everyone, everywhere. As an AI-first company, this is an important part of our collective mission. And we want to work with the best AI talent, wherever that talent is, to achieve it.
That’s why I am excited to launch the Google AI China Center, our first such center in Asia, at our Google Developer Days event in Shanghai today. This Center joins other AI research groups we have all over the world, including in New York, Toronto, London and Zurich, all contributing towards the same goal of finding ways to make AI work better for everyone.
Focused on basic AI research, the Center will consist of a team of AI researchers in Beijing, supported by Google China’s strong engineering teams. We’ve already hired some top experts, and will be working to build the team in the months ahead (check our jobs site for open roles!). Along with Dr. Jia Li, Head of Research and Development at Google Cloud AI, I’ll be leading and coordinating the research. Besides publishing its own work, the Google AI China Center will also support the AI research community by funding and sponsoring AI conferences and workshops, and working closely with the vibrant Chinese AI research community.
Humanity is going through a huge transformation thanks to the phenomenal growth of computing and digitization. In just a few years, automatic image classification in photo apps has become a standard feature. And we’re seeing rapid adoption of natural language as an interface with voice assistants like Google Home. At Cloud, we see our enterprise partners using AI to transform their businesses in fascinating ways at an astounding pace. As technology starts to shape human life in more profound ways, we will need to work together to ensure that the AI of tomorrow benefits all of us.
The Google AI China Center is a small contribution to this goal. We look forward to working with the brightest AI researchers in China to help find solutions to the world’s problems.
Once again: the science of AI has no borders, and neither do its benefits.
Samsung Electronics is the driving force behind the technology securing the way we shop, the way we communicate, even the way we travel to different countries. The company is fully committed to growing the smart card industry through its integrated circuits (IC) that are present in a number of vital items we use every day. And as a leading player in the space, Samsung is already working with major organizations to help them to better protect consumers.
Take SIM cards, electronic IDs, e-passports and credit cards. One thing they all have in common is that they all use smart card ICs. Samsung has played an important part in making the technology ubiquitous as the industry switches from older, less secure solutions such as magnetic stripe cards.
For example, Samsung has led the SIM card market since 2006. And in 2013, Samsung was the first in the industry to be accredited with a CC EAL7 smart card IC, the highest level of security certification. The company will continue to expand here and elsewhere in the smart card IC market in the future thanks to the value-added solutions it is incorporating into its products, such as embedded Secure Element (eSE) or embedded flash.
Expansion Based on Robust Security
The Samsung S3FT9MF smart card IC supports both ISO7816 contact and ISO14443 contactless interfaces, and is CC EAL6+ (Common Criteria Evaluation Assurance Level) certified, providing strong security countermeasures against various security threats such as template attacks, power attacks and reverse engineering.
Strong security and durability are of utmost importance when it comes to smart card ICs that hold extremely personal and private information. With a high level of security and a fast yet durable embedded flash solution with up to 500,000 write/erase cycles, Samsung’s smart card IC will continuously keep users’ personal information safe and sound in various forms of smart cards.
Samsung’s most recent smart card IC, the S3FT9MF, is a solution for payment cards issued by financial institutions, such as Swiss banks, and Samsung has been expanding its smart card IC business to government IDs and other payment applications. The S3FT9MF has recently been adopted as the main IC by several clients, including a European tier-1 card manufacturer. In addition, electronic IDs equipped with Samsung’s S3FT9MF are expected to become available in the first half of 2018.
Traditional magnetic stripe cards have been the de facto solution for ID and payment cards since the 1960s, but concerns arose because the static information on the stripes could easily be cloned. Smart card ICs have become an alternative solution thanks to their stronger security attributes, as well as the versatility of the technology.
Such benefits of smart card ICs have driven the transition to EMV (Europay, Mastercard, Visa; the global standard for payment cards with smart card ICs) cards in developed countries, as well as the expanding adoption of e-passports and government-issued IDs. In the government ID sector alone, market estimates show average annual growth of seven percent in smart card IC shipments from 2015 through 2022.* And as the market grows, Samsung is ready to expand.
When extreme weather threatens a community, people who may not normally follow local news rely on their local reporters and meteorologists to keep them informed and help them stay safe. We wanted to see which videos on Facebook are being viewed beyond a publisher’s own followers, and reaching the community at large.
Using internal Facebook data, we looked at some of the Facebook Live broadcasts from Hurricane Irma and Hurricane Harvey where a high percentage of watch time came from people in the same market as the publisher who aren’t fans of the Page. From meteorologists going live from the studio to answer pressing questions in real time to stations and newspapers sharing live cams of the affected areas, here are some ways local publishers used Facebook Live to cover extreme weather and reach a larger audience.
Steve Weagle, chief meteorologist of WPTV 5, went live on Facebook from the studio to provide continuous coverage of Hurricane Irma. He answered questions from people in the community as WPTV 5 reporters were live on the air in the background.
KPRC2 in Houston simulcast their coverage of Hurricane Harvey, showcasing a press conference with the governor as well as the latest updates from the station’s chief meteorologist.
A reporter from the Corpus Christi Caller-Times went live to relay some of the personal stories he was hearing from members of the community affected by Hurricane Harvey.
ABC Action News went live for the press conference of Governor Scott, who gave an update on Hurricane Irma while it was a category 5 storm with winds of 185 mph.
KCBD News Channel 11 displayed live video cams of areas in the path of Hurricane Harvey as the storm made landfall. The station updated the post with the latest information throughout the day.
Meteorologist Jamie Ertle of WTOC-TV in Savannah, Georgia went live from her phone, answering questions from her Facebook audience about Hurricane Irma. Questions ranged from advice on how to evacuate to whether it was too late to leave.
10 News WTSP shared a live breaking news press conference to report on fatalities from Hurricane Irma.
11 Alive reporters Neima Abdulahi and Joe Flocaari flew directly into Hurricane Irma with hurricane hunters, and then went live on Facebook to answer questions about their experience.
Fox 8 News went live from a dog rescue shelter in Northeast Ohio where dogs were transferred to make room for rescued animals in the aftermath of Hurricane Harvey.
The Houston Chronicle went live as J.J. Watt and his Houston Texans teammates distributed relief supplies to people affected by Hurricane Harvey.
Fox 4 in Fort Myers did a simulcast of its Hurricane Irma coverage to Facebook Live, featuring in-studio meteorologists and on-the-ground reporting.
If you’re interested in using Facebook Live to reach new audiences, check out our Live Best Practices.
Samsung Electronics recently announced the launch of Relúmĭno, an application that works in conjunction with the Gear VR to help those living with low vision see the world more clearly.
The app provides users with a visual aid that’s more approachable and affordable than prohibitively expensive alternatives. In the video below, see how the team behind Relúmĭno was inspired to create the vision-enhancing app, and how its convenient functions make it easier for millions of people around the world to read a book, watch TV, and explore the world around them.
Today we are announcing that Facebook has decided to move to a local selling structure in countries where we have an office to support sales to local advertisers. In simple terms, this means that advertising revenue supported by our local teams will no longer be recorded by our international headquarters in Dublin, but will instead be recorded by our local company in that country.
We believe that moving to a local selling structure will provide more transparency to governments and policy makers around the world who have called for greater visibility over the revenue associated with locally supported sales in their countries.
It is our expectation that we will make this change in countries where we have a local office supporting advertisers in that country. That said, each country is unique, and we want to make sure we get this change right. This is a large undertaking that will require significant resources to implement around the world. We will roll out new systems and invoicing as quickly as possible to ensure a seamless transition to our new structure. We plan to implement this change throughout 2018, with the goal of completing all offices by the first half of 2019.
Our headquarters in Menlo Park, California, will continue to be our US headquarters and our offices in Dublin will continue to be the site of our international headquarters.
Samsung Electronics today launched a total of three new newsrooms, Samsung Newsroom Netherlands and Samsung Newsroom Belgium (in Flemish and in French), which serve as hubs for local media and consumers to access Samsung-related content.
Samsung Newsroom Netherlands merges Dutch-language resources with content from the Global Newsroom, while Samsung Newsroom Belgium offers Samsung-related content in both Flemish and in French. Both feature the latest updates on the company and its products, as well as information on industry trends, Samsung events, and inspirational stories regarding Samsung’s citizenship initiatives. Visitors to the sites will also enjoy access to a wide collection of multimedia resources including images, videos, infographics and event livestreams.
The platforms will also highlight local projects such as Samsung Electronics Benelux B.V.’s In-Traffic Reply, a smartphone app that enhances road safety by sending automated responses to calls and texts while users are behind the wheel.
The launches of Samsung Newsroom Netherlands and Belgium bring the company’s total number of newsrooms to 19 – a figure that also includes the Global Newsroom, as well as local editions in the U.S., Korea, Vietnam, Brazil, India (in English and in Hindi), Germany, Russia, Mexico, the U.K., Argentina, Malaysia, Italy, South Africa and Spain. Going forward, Samsung plans to expand this network with even more local-language communication channels to bring its vision to a wider media & consumer base around the world.
Americans know the importance of Dec. 7, 1941, when a Japanese attack on Pearl Harbor resulted in the deaths of 2,400 Americans. Another Dec. 7 event is worth remembering, one that isn’t known by nearly as many people, yet is “connected in an interesting way to the events that were unleashed on Dec. 7, 1941,” writes Brad Smith, Microsoft president, and Carol Ann Browne, Microsoft director of executive communications, in a new post in the “Today in Technology” series on LinkedIn.
On Dec. 7, 1932, Americans opened their newspapers to read about a controversial decision by the U.S. to grant a visa to a foreigner named Albert Einstein. Granting such a visa was opposed by some Americans, who suspected the world-famous physicist of many things, including being a Communist, with concerns that he would hurt the U.S.
The opposite was the case, as we all know now. Einstein would play a role in bringing World War II to an end by urging President Roosevelt to launch what would become the Manhattan Project to create the world’s first atomic weapon.
“None of this means that every person who wants to enter the country should be permitted to do so,” Smith and Browne write. “Immigration remains complicated, in the United States and every other country. But Einstein’s story reminds us of the enormous upside of attracting the best talent in the world – and the fact that the fullness of this talent and its potential contributions only emerge over time. At a time when Congress (hopefully) will soon be turning its attention to DACA (Deferred Action for Childhood Arrivals) and a new generation of Dreamers, and at a time when high-skilled talent from some countries confront a bureaucratic green card backlog, immigration remains an opportunity not just for immigrants, but for all of us who were born in the United States that can benefit from their presence.”
Dec. 7, Smith and Browne write, is a date for remembering “the terrible forces that can divide us. As well as the more hopeful steps that can bring us together.”
Craig Tranter is a former educator, and now serves as a technology presenter for Cisco. This blog is part of a series on advancements and opportunities in education. All views are his own. One thing that all teachers will be fairly familiar with by now is the use of Interactive White Boards (IWBs). However, the […]
Can you say "shake"? This week we’re introducing some of #teampixel’s furry friends who always make the day a little brighter. From fabulous felines to a French Bulldog in PJs, scroll through this week’s “pawsome” picks and get to know the pets of #teampixel—13/10 would portrait mode again.
Want to get your Pixel photos featured on The Keyword? Make sure to tag your photos with #teampixel for the opportunity to see your photos here next!
Today, Talos is publishing a glimpse into the most prevalent threats we’ve observed between December 01 and December 08. As with previous round-ups, this post isn’t meant to be an in-depth analysis. Instead, this post will summarize the threats we’ve observed by highlighting key behavior characteristics, indicators of compromise, and how our customers are automatically […]