With people spending more time on social media, many rightly wonder whether that time is good for us. Do people connect in meaningful ways online? Or are they simply consuming trivial updates and polarizing memes at the expense of time with loved ones?
Snooze is an option to temporarily unfollow a person, Page or group for 30 days.
Just weeks after getting access to CrowdTangle, La Gaceta Salta — a three-year-old Argentine local news outlet with accelerating growth and a burgeoning online audience — was seeing value in their newsroom. The La Gaceta Salta team had a problem on their hands:…
Editor’s note: Senior Product Manager Berit Hoffmann leads Hire, a recruiting application Google launched earlier this year. In this post, she shares five ways businesses can improve their hiring process and secure great talent.
With 2018 quickly approaching, businesses are evaluating their hiring needs for the new year.
According to a recent survey of 2,200 hiring managers, 46 percent of U.S. companies need to hire more people but struggle to fill open positions with the right candidates. If your company lacks strong hiring processes and tools, it's easy to make sub-optimal hiring decisions that carry negative repercussions.
We built Hire to help businesses hire the right talent more efficiently, and integrated it with G Suite to help teams collaborate more effectively throughout the process. As your business looks to invest in talent next year, here are five ways to positively impact your hiring outcomes.
1. Define the hiring process for each role.
Take time to define each stage of the hiring process, and think about whether and how the process may need to differ from role to role. This will help you tailor your evaluation of each candidate to company expectations, as well as to the qualifications of a particular role.
Earlier this year, Google reviewed a subset of its own interview data to discover the optimal number of interviews needed in the hiring process to evaluate whether a candidate is right for Google. Statistical analysis showed that four interviews were enough to predict with 86 percent confidence whether someone should be hired. Of course, every company's hiring process varies according to size, role or industry—some businesses require double that number of interviews, whereas others may only need one interview.
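The diminishing return from extra interviews is easy to see in a toy simulation (illustrative only, not Google's actual analysis): model each interview as a noisy read on a candidate's underlying quality, and measure how often the average of n interview scores reaches the same decision as perfect information would.

```python
import random
import statistics

random.seed(42)

def decision_accuracy(n_interviews, n_candidates=20000, noise=1.0):
    """Fraction of candidates where the mean of n noisy interview
    scores agrees with the 'true' hire/no-hire decision."""
    correct = 0
    for _ in range(n_candidates):
        quality = random.gauss(0, 1)          # latent candidate quality
        scores = [quality + random.gauss(0, noise) for _ in range(n_interviews)]
        hired = statistics.mean(scores) > 0   # decision from interviews
        should_hire = quality > 0             # decision with perfect information
        correct += (hired == should_hire)
    return correct / n_candidates

for n in (1, 2, 4, 8):
    print(n, round(decision_accuracy(n), 3))
```

With these made-up noise assumptions, accuracy climbs steeply from one to four interviews and then flattens: going from four to eight buys only a few more percentage points, which is the shape of the trade-off the Google analysis describes.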
Using Hire to manage your recruiting activities allows you to configure as many hiring process “templates” as you’d like, as well as use different ones for different roles. For example, you might vary the number of interview rounds based on department. Whatever process you define, you can bring all candidate activity and interactions together within Hire. Plus, Hire integrates with G Suite apps, like Gmail and Calendar, to help you coordinate the process.
2. Make jobs discoverable on Google Search.
For many businesses, sourcing candidates is one of the most time-consuming parts of the hiring process, so Google launched Job Search to help employers better showcase job opportunities in search results. Since launch, 60 percent more employers in the United States are showing jobs in Search.
Making your open positions discoverable where people are searching is an important part of attracting the best talent. If you use Hire to post a job, the app automatically formats your public job posting so it is discoverable by job seekers in Google search.
3. Make sure you get timely feedback from interviewers.
The sooner an interviewer provides feedback, the faster your hiring team can reach a decision, which improves the candidate’s experience. To help speed up feedback submissions, some companies like Genius.com use a “silent process” approach. This means interviewers are not allowed to discuss a candidate until they submit written feedback first.
Hire supports this “silent process” approach by hiding other people’s feedback from interviewers until they submit their own. We’ve found that this can incentivize employees to submit feedback faster because they want to see what their colleagues said. Sixty-three percent of Hire interviewers leave feedback within 24 hours of an interview, and 75 percent do so within 48 hours.
4. Make sure their feedback is thoughtful, too.
Beyond speedy feedback delivery, it’s perhaps more important to receive quality evaluations. Make sure your interviewers know how to write clear feedback and try to avoid common mistakes such as:
- Writing vague statements or summarizing a candidate’s resume.
- Restating information from rubrics or questionnaires rather than giving specific examples.
- Getting distracted by personality or evaluating attributes unrelated to the job.
One way you can encourage employees to stay focused when they interview a candidate is to assign them a specific topic to cover in the interview. In Hire, topics are included in each interviewer’s Google Calendar invitation for easy reference without having to log into the app.
Maintaining a high standard for written feedback helps your team not only make hiring decisions today, but also helps you track candidates for future consideration. Even if you don’t hire someone for a particular role, the person might be a better fit for another position down the road. In Hire, you can find candidates easily with Google’s powerful search technology. Plus, Hire takes past interview feedback into account and ranks previous candidates higher if they’ve had positive feedback.
5. Stop letting internal processes slow you down.
If you don’t manage your hiring process effectively, it can be a huge time sink, especially as employers take longer and longer to hire talent. If your business lags on making a decision, it can mean losing a great candidate.
Implementing a solution like Hire can make it a lot easier for companies to move quickly through the hiring process. Native integrations with the G Suite apps you’re already using can help you cut down on copy-pasting or having to jump between multiple tabs. If you email a candidate in Gmail, it’s automatically synced in Hire so the rest of the hiring team can follow the conversation. And if you need to schedule a multi-slot interview, you can do so easily in Hire which lets you access interviewer availability or even book conference rooms. Since launching in July, we’ve seen the average time between posting a position and hiring a candidate decrease from 128 days to just 21 days (3 weeks!).
Posted by Dave Smith, Developer Advocate for IoT
Creating robust connections between IoT devices can be difficult. WiFi and Bluetooth are ubiquitous and work well in many scenarios, but suffer limitations when power is constrained or large numbers of devices are required on a single network. In response to this, new communications technologies have arisen to address the power and scalability requirements for IoT.
Low-power Wireless Personal Area Network (LoWPAN) technologies are specifically designed for peer-to-peer usage on constrained battery-powered devices. Devices on the same LoWPAN can communicate with each other using familiar IP networking, allowing developers to use standard application protocols like HTTP and CoAP.
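Because LoWPAN nodes speak ordinary IPv6, application code looks like any other socket program. A minimal sketch using Python's standard library, with the loopback interface standing in for a real mesh link (the port and payload are illustrative; on a device, only the peer's address would change):

```python
# Two "LoWPAN peers" exchanging a datagram over IPv6.
# Loopback stands in for a real 6LoWPAN link in this sketch.
import socket

PORT = 5683  # CoAP's default UDP port, used here only for illustration

# "Device A" listens for sensor readings.
server = socket.socket(socket.AF_INET6, socket.SOCK_DGRAM)
server.bind(("::1", PORT))

# "Device B" sends a reading to its peer by IPv6 address.
client = socket.socket(socket.AF_INET6, socket.SOCK_DGRAM)
client.sendto(b"temperature=21.5", ("::1", PORT))

payload, addr = server.recvfrom(1024)
print(payload.decode())  # temperature=21.5
server.close()
client.close()
```

The point of IP-based LoWPANs like Thread is exactly this: the networking code doesn't need to know it's running over an 802.15.4 mesh rather than WiFi.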
The specific LoWPAN technology that we are most excited about is Thread: a secure, fault-tolerant, low-power mesh-networking technology that is quickly becoming an industry standard.

Today we are announcing API support for configuring and managing LoWPAN as a part of Android Things Developer Preview 6.1, including first-class networking support for Thread. By adding an 802.15.4 radio module to one of our developer kits, Android Things devices can communicate directly with other peer devices on a Thread network. These types of low-power connectivity solutions enable Android Things devices to perform edge computing tasks, aggregating data locally from nearby devices to make critical decisions without a constant connection to cloud services. See the LoWPAN API guide for more details on building apps to create and join local mesh networks.
OpenThread makes getting started with LoWPAN on Android Things easy. Choose a supported radio platform, such as the Nordic nRF52840, and flash it with firmware to enable it as a Network Co-Processor (NCP). Integrate the radio into Android Things using the LoWPAN NCP user driver. You can also expand support to other radio hardware by building your own user drivers. See the LoWPAN user driver API guide for more details.
To get started with DP6.1, use the Android Things Console to download system images and flash existing devices. Then download the LoWPAN sample app to try it out for yourself! LoWPAN isn’t the only exciting thing happening in the latest release. See the release notes for the full set of fixes and updates included in DP6.1.
Please send us your feedback by filing bug reports and feature requests, as well as asking any questions on Stack Overflow. You can also join Google’s IoT Developers Community on Google+, a great resource to get updates and discuss ideas. Also, we have our new hackster.io community, where everyone can share the amazing projects they have built. We look forward to seeing what you build with Android Things!
For thousands of years, people have looked up at the stars, recorded observations, and noticed patterns. Some of the first objects early astronomers identified were planets, which the Greeks called “planētai,” or “wanderers,” for their seemingly irregular movement through the night sky. Centuries of study helped people understand that the Earth and other planets in our solar system orbit the sun—a star like many others.
Today, with the help of technologies like telescope optics, space flight, digital cameras, and computers, it’s possible for us to extend our understanding beyond our own sun and detect planets around other stars. Studying these planets—called exoplanets—helps us explore some of our deepest human inquiries about the universe. What else is out there? Are there other planets and solar systems like our own?
Though technology has aided the hunt, finding exoplanets isn’t easy. Compared to their host stars, exoplanets are cold, small and dark—about as tricky to spot as a firefly flying next to a searchlight … from thousands of miles away. But with the help of machine learning, we’ve recently made some progress.
One of the main ways astrophysicists search for exoplanets is by analyzing large amounts of data from NASA’s Kepler mission with both automated software and manual analysis. Kepler observed about 200,000 stars for four years, taking a picture every 30 minutes, creating about 14 billion data points. Those 14 billion data points translate to about 2 quadrillion possible planet orbits! It’s a huge amount of information for even the most powerful computers to analyze, creating a laborious, time-intensive process. To make this process faster and more effective, we turned to machine learning.
Machine learning is a way of teaching computers to recognize patterns, and it’s particularly useful in making sense of large amounts of data. The key idea is to let a computer learn by example instead of programming it with specific rules.
I’m a Google AI researcher with an interest in space, and started this work as a 20 percent project (an opportunity at Google to work on something that interests you for 20 percent of your time). In the process, I reached out to Andrew, an astrophysicist from UT Austin, to collaborate. Together, we took this technique to the skies and taught a machine learning system how to identify planets around faraway stars.
Using a dataset of more than 15,000 labeled Kepler signals, we created a TensorFlow model to distinguish planets from non-planets. To do this, it had to recognize patterns caused by actual planets, versus patterns caused by other objects like starspots and binary stars. When we tested our model on signals it had never seen before, it correctly identified which signals were planets and which signals were not planets 96 percent of the time. So we knew it worked!
Kepler 90i is the eighth planet discovered orbiting the Kepler 90 star, making it the first known 8-planet system outside of our own.
Armed with our working model, we shot for the stars, using it to hunt for new planets in Kepler data. To narrow the search, we looked at the 670 stars that were already known to host two or more exoplanets. In doing so, we discovered two new planets: Kepler 80g and Kepler 90i. Significantly, Kepler 90i is the eighth planet discovered orbiting the Kepler 90 star, making it the first known 8-planet system outside of our own.
Some fun facts about our newly discovered planet: it’s 30 percent larger than Earth, with a surface temperature of approximately 800°F (not ideal for your next vacation). It also orbits its star every 14 days, meaning you’d have a birthday there just about every two weeks.
The sky is the limit (so to speak) when it comes to the possibilities of this technology. So far, we’ve only used our model to search 670 stars out of 200,000. There may be many exoplanets still unfound in Kepler data, and new ideas and techniques like machine learning will help fuel celestial discoveries for many years to come. To infinity, and beyond!
Seven out of ten customers visit a business or make a purchase based on info they found online.¹ With Posts on Google, businesses can share timely updates right where people find them on Search and Maps.
Creative posts help a new restaurant become a local hit
Igor Chang founded HAO Restaurant and Bar to bring Asian dishes to his beachside neighborhood in João Pessoa, Brazil. HAO keeps its doors open until 3AM, drawing customers for dinner and a night on the town.
Igor uses Posts on Google to get more reservations by sharing discounts on sushi, photos of cocktail specials, and links to videos of HAO’s live jazz band.
In just three months, Igor’s posts received more than 5,000 views. He’s noticed an increase in reservations and often hears customers mention his latest posts.
Mobile-friendly Posts bring visitors to a thrilling destination
Ramoji Film City is the backdrop for some of India’s biggest films, but it’s also a destination itself, where tourists can explore film sets, stay in luxury hotels, and see live performances.
When T Prasad joined the team at Ramoji Film City, he discovered that 80% of their customers found their business info on mobile devices. So he started sharing mobile-friendly posts with photos of Ramoji’s amusement park and other attractions.
After one month of posting on Google, Prasad saw a 20% increase in website pageviews. He also noticed a jump in calls from people who are excited to visit Ramoji Film City.
A candy shop satisfies their community’s sweet tooth
Raul Vega discovered that Mexican families in his Los Angeles neighborhood wanted to share the candies they loved as kids with their own children in the U.S. So he opened Dulceria Dulfi Mexican Candy Store, which carries sweets from De La Rosa triple-layer marzipan and peanut butter candies to Vero Manitas hand-shaped lollipops.
Raul uses Posts to share popular candies, seasonal specials and new arrivals with his customers online.
Since he started posting this summer, Raul has seen an average of seven new customers each week. Those customers make a big difference for his business. Posts also help him track which candies get the most attention, so he can update future orders for his shop.
Posting on Google is a way to share relevant, fresh content with people who search for businesses like yours online. Start posting and reach new customers through your Google listing today.
¹ Google/Ipsos Connect, “Benefits of a Complete Google My Business Listing,” October 2016. A total of N=15,904 adults 18-64, Google Search or Maps users, recent category purchasers (Bakery/Sweet shop, Auto, Spa/Hairdresser, Clothing, Bookstore/Logistics) in India, Australia, Germany, Turkey and the U.S.
This week we’re looking at how the Google News Lab is working with news organizations to build the future of journalism. So far, we’ve shared how the News Lab works with newsrooms to address industry challenges and use emerging technologies. Today, we’ll take a look at the News Lab’s global footprint and its efforts to fuel innovation in newsrooms across the world.
Technology continues to change how journalists across the world report and tell stories. But how technology shapes journalism varies from region to region. This past year our team, the Google News Lab, conducted in-person trainings for journalists across 52 countries. Today, we take a look at the unique challenges of newsrooms in the regions we serve and how we’ve adapted our mission for each region to help build the future of journalism.
In Europe, it’s been another big year for politics with major general elections taking place in the Netherlands, France, UK, Germany and Norway. We wanted to ensure we were helping newsrooms cover these critical moments with the accuracy and depth they required. So, our efforts across these countries focused on helping newsrooms verify digital content in a timely fashion and providing training in digital skills for journalists.
- We helped the First Draft Coalition pioneer new collaborative reporting models to combat misinformation and verify news stories during the UK, French and German elections. In France, we supported First Draft’s launch of CrossCheck, a collaboration among 37 newsrooms to verify or debunk online stories during the election. In the build-up to the elections in the UK and Germany, we also supported fact-checking organizations Full Fact and Correctiv to help newsrooms identify new sources of information. These initiatives helped more than 500 European journalists verify content online and debunk 267 inaccurate stories shared on social media during the French and German elections.
- Journalists across Europe used Google Trends to help visualize big political stories—here’s a peek at what they did.
- We continued to ramp up our efforts to train European journalists in digital skills. We worked with the European Journalism Centre on the latest series of the News Impact Summit, providing large-scale training events on news gathering and storytelling, combined with design-thinking workshops for journalists in Rome, Hamburg, Budapest, Manchester and Brussels. And our partnership with Netzwerk Medien-Trainer has provided over a thousand journalists across northern Europe with expert training on data journalism, verification and mapping.
This year, we expanded our training and programs to the Asia Pacific, where we’ve tailored our approach to meet the specific needs of journalists across this diverse landscape. In a part of the world that is largely mobile-first (or mobile-only), and where chat apps are the norm, newsrooms face a unique set of opportunities and challenges.
- In July, our first News Lab APAC Summit welcomed 180 guests from 150 news organizations across 15 countries to our offices in Singapore. Product specialists and experts from newsrooms across the region came together to share best practices, learn about emerging technologies, and engage in open dialogue on challenges critical to the news industry.
- In India, our Teaching Fellow has provided training and support to around 4,000 journalists and journalism students across the country. Our partnership with the Digital Identities team helped journalists in New Delhi experiment and engage new audiences with their stories.
- Working in partnership with News Lab, the South China Morning Post released an immersive virtual reality project to depict the changing landscape of Hong Kong over 170 years of history.
- We’re working to support research projects that tackle industry challenges – working with Media Diversity Australia to quantify issues of diversity and representation in the Australian news organizations, while in South Korea we’re supporting a study about the use of chat apps and their role in the news ecosystem.
Working with journalists across Latin America, we elevated new voices beyond traditional newsrooms, and helped established journalists experiment with new technology and research. In Brazil alone there are an estimated 139 million Internet users, providing a huge opportunity for news organizations to experiment and test new formats.
- We hosted the first Google News Lab Summit in Latin America at Google’s HQ in São Paulo, which convened 115 journalists from across Brazil. Attendees from 71 organizations heard from product managers and industry experts about data journalism, immersive storytelling and verification.
- Impacto.jo, an experimental project in Brazil supported by the News Lab, helps journalists track the social impact of their reporting. As a part of the project, six organizations including Nexo Jornal, Folha de S. Paulo, Veja, Gazeta do Povo, Nova Escola and Projor will each track the public response and social reaction to their stories.
- In Brazil, we brought 300 journalists to a first-of-its-kind independent journalism festival in Rio de Janeiro to share ideas on how to engage audiences online with original journalism.
- Our Teaching Fellows based in Buenos Aires and Mexico City have travelled beyond Argentina and Mexico to provide 75 workshops in Chile, Colombia, Costa Rica, Panama, Peru, Puerto Rico and Uruguay.
Middle East & Africa
In the Middle East and Africa, we focused on the region’s growing number of mobile phone users, providing trainings for journalists on digital integration, which remains a challenge in this part of the world.
- We’re working with Code for Africa and the World Bank to provide training to six thousand journalists across 12 major African cities. Their online learning course will provide self-paced lessons for journalists across Africa. They’re also working to support local Hacks/Hackers meetings to bring journalists and developers together to share new ideas.
- In South Africa, we held a GEN Editors Lab hackathon, in association with Code for Africa, that brought together 35 developers and journalists to tackle a range of topics including misinformation. This builds on our support for previous events in Nigeria and further afield in Australia, Japan, Indonesia, Italy, Ireland, Portugal and Taiwan.
The bulk of our in-person training work has been made possible by the Google News Lab Teaching Fellowship, which launched this year and enlists industry professionals, academic experts and experienced journalists to help us provide practical, in-person workshops and presentations across the world. In total, we hosted workshops, hackathons, and in-person trainings for 48,000 journalists across 52 countries.
Since we can’t be everywhere in-person, our online training center offers a round-the-clock service in 13 languages including Arabic, Polish, Hebrew and Hindi. We’re continuing to collaborate with training organizations around the world, and our growing Training Network now includes expert trainers in Europe, the U.S. and parts of Asia Pacific. There’s plenty more to do in 2018 and we’re looking forward to working with journalists and newsrooms across the world.
We’re making an update to News Feed ranking that will help surface videos people are proactively seeking out and coming back to on Facebook.
Jump, Google’s platform for virtual reality video capture that combines high-quality VR cameras and automated stitching, simplifies VR video production and helps filmmakers of all backgrounds and skill levels create amazing content. For the past two years, we’ve worked with NFL Films, one of the most recognized teams of filmmakers in sports and the recipient of 112 Sports Emmys, to show what some of the best creators could do with Jump. Last year they debuted the first season of the virtual reality docuseries “Immersed,” and today the first three episodes of season two land on Daydream through YouTube VR and the NFL’s YouTube channel. This season will give fans an even more in-depth look at some of the NFL’s most unique personalities through three multi-episode arcs, each dedicated to a different player.
Shot with the latest Jump camera, the YI HALO, the first three episodes follow Chris Long, defensive end for the Philadelphia Eagles. Each episode gives fans a sneak peek into his life on and off the field, from his decision to donate his salary to charity to a look at how he prepares for game day. They’re available on Daydream through YouTube VR and the NFL’s YouTube channel today, with future episodes featuring Calais Campbell of the Jacksonville Jaguars and players from the 2018 Pro Bowl coming soon.
We caught up with NFL Films Senior Producer Jason Weber to hear more about season two, what it was like to use Jump and advice for other filmmakers creating VR video content for the first time:
What makes season two of “Immersed” different from the first season?
For season two of NFL “Immersed,” we wanted to try and dig a bit deeper into the stories of our players and give fans a real sense of what makes them who they are on and off the field, so we’re devoting three episodes to each subject.
VR is such a strong vehicle for empathy, and we wanted to focus the segments on players who are making a difference on and off the field. Chris Long is having a tremendous season with the Eagles as part of one of the best defenses in football, but his impact off the field is equally inspiring. Calais Campbell is a larger-than-life character whose influence is being felt on the resurgent Jaguars and throughout his new community in Jacksonville. And the Pro Bowl is a unique event where all of the best players come to have fun, and the relaxed setting gives us a chance to put cameras where they normally can’t go, giving viewers a true feeling of what it’s like to play with the NFL’s finest.
Last year was NFL Films’ first foray into shooting content in VR. What was it like filming and producing season one, and how did it compare to your experience with season two this year?
We learned a lot last season; in particular, the challenges of bringing multiple VR cameras to the sidelines on game day. As fast as the game looks on TV, it moves even faster when you’re right there on the field. Getting the footage we needed, while staying ready to jump out of the way when a ball or player is coming right at you, took some time to master.
What makes shooting for VR different from traditional video content? What considerations do you have to make when shooting in VR?
Camera position is one big difference in shooting VR versus traditional video content. When we shoot in traditional video formats our cinematographers are constantly moving to capture different angles and frames of our subjects and scenes. With VR—though we’ve noticed a slight shift toward more cuts and angles in edited content in the past year—letting a scene play longer from one angle and positioning the camera so that the action takes advantage of the 360-degree range of vision helps differentiate a VR production from a standard format counterpart.
What did you like about using the Yi Halo to shoot the second season of “Immersed?”
With the Halo, we were most excited about the Up camera. You might not think that a camera facing straight up would make that much of a difference in football, but there’s a lot happening in that space that would get lost without it. We can now place a camera in front of a quarterback and have him throw the ball over the Halo, giving a viewer a more realistic view of that scene. With field goals, placing the camera under the goal posts produces a very interesting visual that wouldn’t work if the top camera wasn’t able to capture the ball going through the uprights. One of the most goosebump-inducing moments at any NFL game is a pregame flyover, which we can now capture in its full glory thanks to the top camera.
What tips do you have for other filmmakers thinking of getting into making VR video content?
Take the time to consider why you want to use VR versus traditional formats to tell your story. I work in both formats and feel that if I’m just telling the same story in VR that I would in HD, then I’m not doing my job as a VR filmmaker. VR gives you the unique opportunity to tell a story in a 360-degree space. Use that space to your advantage in creating something memorable.
Grab your Daydream View and head to YouTube today to watch the first three episodes, and be sure to check back soon to see the rest of season two of “Immersed.”
Editor’s Note: Today’s post is from Becky Torkelson, Computer Support Specialist Leader for Scheels, an employee-owned chain of 27 sporting goods stores in the Midwest and West. Scheels uses Chrome browser and G Suite to help its 6,000 employees better serve customers and work together efficiently.
Whether customers come to Scheels stores to buy running shoes, fishing rods or camping stoves, they talk to associates who know the products inside and out. We hire people who are experts in what they’re selling and who have a passion for sports and outdoor life. They use Chrome browser and G Suite to check email and search for products from Chromebooks right on the sales floor, so they can spend more time serving customers.
That’s a big improvement over the days when we had a few PCs, equipped with IBM Notes and Microsoft Office, in the back rooms of each store. Associates and service technicians used the PCs to check email, enter their work hours or look up product specs or inventory for customers—but that meant they had to be away from customers and off the sales floor.
Starting in 2015, we bought 100 Chromebooks and 50 Chromeboxes, some of which were used to replace PCs in store departments like service shops. Using Chromebooks, employees in these departments could avoid manual processes that slowed down customer service in the past. With G Suite, Chrome devices and Chrome browser working together, our employees have access to Gmail and inventory records when they work in our back rooms. They can quickly log on and access the applications they need. This means they have more time on the sales floor for face-to-face interaction with customers.
Our corporate buyers, who analyze inventory and keep all of our stores stocked with the products we need, use Google Drive to share and update documents for orders instead of trading emails back and forth. We’re also using Google Sites to store employee forms and policy guides for easy downloading—another way people save time.
We use Chrome to customize home pages for employee groups, such as service technicians. As soon as they log in to Chrome, the technicians see the bookmarks they need—they don’t have to jump through hoops to find technical manuals or service requests. Our corporate buyers also see their own bookmarks at login. Since buyers travel from store to store, finding their bookmarks on any computer with Chrome is a big time-saver.
Our IT help desk team tells me that they hardly ever get trouble tickets related to Chrome. There was a very short learning curve when we changed to Chrome, an amazing thing when you consider we had to choose tools for a workforce of 6,000 people. The IT team likes Chrome’s built-in security: they know that anti-malware and antivirus protections are running and updating in the background, so Chrome is doing security monitoring for us.
Since Scheels is employee-owned, associates have a stake in our company’s success. They’re excited to talk to customers who want to learn about the best gear for their favorite sports. Chrome and G Suite help those conversations stay focused on customer needs and delivering smart and fast service.
The end of the year is fast approaching, but the fun doesn’t have to end after the ball drops in Times Square. When you’re ready to kick off your travel plans for 2018 and take a weekend getaway, check out our trending destinations for travel inspiration, and our new features to feel confident you’re getting a good deal.
Get tips when the price is right
Long weekends are a great excuse to escape to warmer weather, but worrying about getting the best price for your vacation can be stressful. A recent study we conducted indicated that travelers are more concerned about finding the best price for their vacations than for any other discretionary purchase.
Google Flights can help you get out of town, even when you’re on a budget. Using machine learning and statistical analysis of historical flight data, Flights displays tips under your search results, and you can scroll through them to figure out when it’s best to book. Say you were searching for flights to Honolulu and fares to your destination were cheaper than usual: a tip would say that “prices are less than normal,” and by how much, so you know you’ve spotted a deal. Or, if prices tend to remain steady for the dates and place you’re searching, a tip would indicate that the price “won’t drop further,” based on our price prediction algorithms.
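To make the idea concrete, here is a minimal sketch of how a price tip could be derived from historical fares. It assumes a simple mean-and-deviation rule; this is purely illustrative, not Google’s actual prediction model, and the thresholds and wording are invented.

```python
from statistics import mean, stdev

def price_tip(current_price, historical_prices):
    """Illustrative only: compare today's fare against historical fares
    for the same route and emit a human-readable tip."""
    avg = mean(historical_prices)
    spread = stdev(historical_prices)
    if current_price < avg - spread:
        # Noticeably cheaper than usual: surface the savings.
        savings = round(avg - current_price)
        return f"Prices are less than normal: about ${savings} below average."
    if abs(current_price - avg) <= spread:
        # Within the usual range: waiting probably won't help.
        return "Prices are typical for these dates and likely won't drop further."
    return "Prices are higher than normal for these dates."

print(price_tip(320, [400, 410, 390, 420, 405, 395]))
# prints: Prices are less than normal: about $83 below average.
```

A real system would condition on route, seasonality and booking lead time rather than a single historical average.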
Similarly, when you search for a hotel on Google, a new tip will appear above results when room rates are higher than usual, or if the area is busier than usual due to a holiday, music festival, or even a business conference. So if you’re planning a trip to San Francisco or Las Vegas, you can make sure you’re avoiding dates when big conferences are scheduled and hotel prices tend to be high.
If you prefer to wait and see if prices drop, you can now get email price alerts by opting into Hotel Price Tracking on your phone—this will roll out on desktop in the new year.
See the sights without breaking the bank
Vacation time is precious, and once you book your flight and hotel and arrive at your destination, it’s time to have some fun. Google Trips’ new Discounts feature helps you instantly access deals for ticketing and tours on top attractions and activities. Book and save on a tour of the Mayan ruins near Cancun, or get priority access to the top of the Eiffel Tower in Paris. No matter where you’re headed (and if you need ideas, read on), Trips makes it easy to browse and access fun stuff to do on your vacation without breaking the bank.
Head to the beach for MLK Weekend
People are already searching for flights for Martin Luther King Jr. weekend, from January 12th to 15th. The top trending domestic destinations for MLK Weekend offer a warm climate—with Florida and Hawaii taking the lead. For folks heading out of the country, Cancun and Bangkok are top beach destinations, whereas Rome and Tokyo are top cultural destinations.
Pick a tropical island or go across the pond for Presidents’ Day
Presidents’ weekend is right on the heels of Valentine’s Day next year, so it’s easy to take time off to spend time with that special someone, celebrate singledom with friends, or maybe just treat yourself to a solo adventure. Tropical islands are the most popular for a domestic getaway, with three of Hawaii’s major islands—Oahu, Maui, and Kauai—all trending in flight searches. For international flight searches, Cancun and Bangkok still top the list, but classic European cities like Paris, Rome, and Barcelona are also climbing in popularity.
Amidst celebrations with friends and family this December, start dreaming about your next winter getaway in the new year. We’ll help you get there.
By: Maria Angelidou-Smith, Product Management Director & Abhishek Bapna, Product Manager Facebook is home to a wide variety of publishers and creators who make videos that connect people, spark conversation and build community. Today we’re sharing updates on video distribution…
Traveling on a bus or train is the time for you to do your best music-listening, news-reading, and social-media scrolling … as long as you don’t miss your stop.
A new feature on Google Maps for Android keeps you on track with departure times, ETAs and a notification that tells you when to transfer or get off your bus or train. And you can track your progress along the way, just like you can with driving, walking or biking directions.
To check out the new feature, head into Google Maps. Type your destination, select transit directions, then choose your preferred route. Tap the “Start” button to get on your way (and you won’t miss your stop this time).
Samsung Electronics today announced the new Samsung Notebook 9 Pen and three new versions of the Samsung Notebook 9 (2018), offering a mobile computing
As excitement about new film “Star Wars: The Last Jedi” has reached fever pitch, it’s worth reflecting on those behind the enduring popularity of the Star Wars
Siti Arofa teaches a first grade class at SD Negeri Sidorukan in Gresik, East Java. Many of her students start the school year without foundational reading skills or even an awareness of how fun books can be. But she noticed that whenever she read out loud using different expressions and voices, the kids would sit up and their faces would light up with excitement. One 6-year-old student, Keyla, loves repeating the stories with a full imitation of Siti’s expressions. Developing this love for stories and storytelling has helped Keyla and her classmates improve their reading and speaking skills. She’s just one child. Imagine the impact that the availability of books and skilled teachers can have on generations of schoolchildren.
In Indonesia today, it’s estimated that for every 100 children who enter school, only 25 exit meeting minimum international standards of literacy and numeracy. This poses a range of challenges for a relatively young country, where nearly one-third of the population—or approximately 90 million people—are below the age of 15.
To help foster a habit of reading, Google.org, as part of its $50M commitment to close global learning gaps, is supporting Inibudi, Room to Read and Taman Bacaan Pelangi to reach 200,000 children across Indonesia.
We’ve consistently heard from Indonesian educators and nonprofits that there’s a need for more high-quality storybooks. With $2.5 million in grants, the nonprofits will create a free digital library of children’s stories that anyone can contribute to. Many Googlers based in our Jakarta office have already volunteered their time to translate existing children’s stories into Bahasa Indonesia to increase the diversity of reading resources that will live on this digital platform.
The nonprofits will develop teaching materials and carry out teacher training in eastern Indonesia to enhance teaching methods that improve literacy, and they’ll also help Indonesian authors and illustrators to create more engaging books for children.
Through our support of this work, we hope we can inspire a lifelong love of reading for many more students like Keyla.
Photo credit: Room to Read
Samsung Electronics and Amazon Prime Video today announced the entire Prime Video HDR library is now available in HDR10+, a new open standard that leverages
At RSNA 2017, the annual event from the Radiological Society of North America, attendees were intrigued by the new digital x-ray from Samsung. The reason was
Developing 3D apps is complicated—whether you’re using a native graphics API or enlisting the help of your favorite game engine, there are thousands of graphics commands that have to come together perfectly to produce beautiful 3D visuals on your phone, desktop or VR headsets.
To help developers diagnose rendering and performance issues with their Android and desktop applications, we’re releasing a new tool called GAPID (Graphics API Debugger). With GAPID, you can capture a trace of your application and step through each graphics command one-by-one. This lets you visualize how your final image is built and isolate calls with issues, so you spend less time debugging through trial and error until you find the source of the problem.
From phones to speakers to watches and more, the Google Assistant is already available across a number of devices and languages—and now, it’s coming to Android tablets running Android 7.0 Nougat and 6.0 Marshmallow and phones running 5.0 Lollipop.
The Google Assistant, now on Tablets
With the Assistant on tablets, you can get help throughout your day—set reminders, add to your shopping list (and see that same list on your phone later), control your smart devices like plugs and lights, ask about the weather and more.
The Assistant on tablets will be rolling out over the coming week to users with the language set to English in the U.S.
Lollipop phones, introducing your Assistant
Earlier this year we first brought the Assistant to Android 6.0 Marshmallow and higher with Google Play Services. Today, we’re adding Android 5.0 Lollipop to the mix, so even more users can get help from the Google Assistant.
The Google Assistant on Android 5.0 Lollipop has started to roll out to users with the language set to English in the U.S., UK, India, Australia, Canada and Singapore, as well as in Spanish in the U.S., Mexico and Spain. It’s also rolling out to users in Italy, Japan, Germany, Brazil and Korea. Once you get the update and opt in, you’ll see an Assistant app icon in your “All apps” list.
So now the question is … What will you ask your Assistant first?
This week we’re looking at the ways the Google News Lab is working with news organizations to build the future of journalism. Yesterday, we learned about how the News Lab works with newsrooms to address industry challenges. Today, we’ll take a look at how it helps the news industry take advantage of new technologies.
From Edward R. Murrow’s legendary radio broadcasts during World War II to smartphones chronicling every beat of the Arab Spring, technology has had a profound impact on how stories are discovered, told, and reach new audiences. With the pace of innovation quickening, it’s essential that news organizations understand and take advantage of today’s emerging technologies. So one of the roles of the Google News Lab is to help newsrooms and journalists learn how to put new technologies to use to shape their reporting.
This past year, our programs, trainings and research gave journalists around the world the opportunity to experiment with three important areas of emerging technology: data journalism, immersive tools like VR, AR and drones, and artificial intelligence (AI) and machine learning (ML).
The availability of data has had a profound impact on journalism, fueling powerful reporting, making complicated stories easier to understand, and providing readers with actionable real-time data. To inform our work in this space, this year we commissioned a study on the state of data journalism. The research found that data journalism is increasingly mainstream, with 51 percent of news organizations across the U.S. and Europe now having a dedicated data journalist.
Our efforts to help this growing class of journalists focus on two areas: curating Google data to fuel newsrooms’ work and building tools to make data journalism accessible.
On the curation side, we work with some of the world’s top data visualists to inspire the industry with data visualizations like Inaugurate and a Year in Language. We’re particularly focused on ensuring news organizations can benefit from Google Trends data in important moments like elections. For example, we launched a Google Trends election hub for the German elections, highlighting Search interest in top political issues and parties, and worked with renowned data designer Moritz Stefaner to build a unique visualization to showcase the potential of the data to inform election coverage across European newsrooms.
We’re also building tools that can help make data journalism accessible to more newsrooms. We expanded Tilegrams, a tool to create hexagon maps and other cartograms more easily, to support Germany and France in the runup to the elections in both countries. And we partnered with the data visualization design team Kiln to make Flourish, a tool that offers complex visualization templates, freely available to newsrooms and journalists.
As new mediums of storytelling emerge, new techniques and ideas need to be developed and refined to unlock the potential of these technologies for journalists. This year, we focused on two technologies that are making storytelling in journalism more compelling: virtual reality and drones.
We kicked off the year by commissioning a research study to provide news organizations a better sense of how to use VR in journalism. The study found, for instance, that VR is better suited to convey an emotional impression rather than information. We looked to build on those insights by helping news organizations like Euronews and the South China Morning Post experiment with VR to create stories. And we documented best practices and learnings to share with the broader community.
We also looked to strengthen the ecosystem for VR journalism by growing Journalism 360, a group of news industry experts, practitioners and journalists dedicated to empowering experimentation in VR journalism. In 2017, J360 hosted in-person trainings on using VR in journalism from London to Austin, Hong Kong to Berlin. Alongside the Knight Foundation and the Online News Association, we provided $250,000 in grants for projects to advance the field of immersive storytelling.
The recent relaxation of regulations by the Federal Aviation Administration around drones made drones more accessible to newsrooms across the U.S., leading to growing interest in drone journalism. Alongside the Poynter Institute and the National Press Photographers Association, we hosted four drone journalism camps across America where more than 300 journalists and photographers learned about legal, ethical and aeronautical issues of drone journalism. The camps helped inspire the use of drones in local and national news stories. Following the camps, we also hosted a leadership summit, where newsroom leaders convened to discuss key challenges on how to work together to grow this emerging field of journalism.
We want to help newsrooms better understand and use artificial intelligence (AI), a technological development that holds tremendous promise—but also raises many unanswered questions. To try to get to some of the answers, we invited CTOs from the New York Times and the Associated Press to our New York office to talk about the future of AI in journalism and the challenges and opportunities it presents for newsrooms.
We also launched an experimental project with ProPublica, Documenting Hate, which uses AI to generate a national database for hate crime and bias incidents. Hate crimes in America have historically been difficult to track since there is very little official data collected at the national level. By using AI, news organizations are able to close some of the gaps in the data and begin building a national database.
Finally, to ensure fairness and inclusivity in the way AI is developed and applied, we partnered with MediaShift on a Diversifying AI hackathon. The event, which convened 45 women from across the U.S., focused on coming up with solutions that help bridge gaps between AI and media.
2018 will no doubt bring more opportunity for journalists to innovate using technology. We’d love to hear from journalists about what technologies we can make more accessible and what kinds of programs or hackathons you’d like to see—let us know.
Posted by Andrew Woloszyn, Software Engineer
Developing for 3D is complicated. Whether you’re using a native graphics API or enlisting the help of your favorite game engine, there are thousands of graphics commands that have to come together perfectly to produce beautiful 3D images on your phone, desktop or VR headset.
GAPID (Graphics API Debugger) is a new tool that helps developers diagnose rendering and performance issues with their applications. With GAPID, you can capture a trace of your application and step through each graphics command one by one. This lets you visualize how your final image is built and isolate problematic calls, so you spend less time debugging through trial and error.
GAPID supports OpenGL ES on Android, and Vulkan on Android, Windows and Linux. GAPID not only enables you to diagnose issues with your rendering commands, but also acts as a tool to run quick experiments and see immediately how those changes would affect the presented frame.
Here are a few examples of where GAPID can help you isolate and fix issues with your application:
What’s the GPU doing?
Working with a graphics API can be frustrating when you get an unexpected result, whether it’s a blank screen, an upside-down triangle, or a missing mesh. As an offline debugger, GAPID lets you take a trace of these applications and then inspect the calls afterwards. You can track down exactly which command produced the incorrect result by looking at the framebuffer, and inspect the state at that point to help you diagnose the issue.
What happens if I do X?
Even when a program is working as expected, sometimes you want to experiment. GAPID allows you to modify API calls and shaders at will, so you can test things like:
- What if I used a different texture on this object?
- What if I changed the calculation of bloom in this shader?
With GAPID, you can now iterate on the look and feel of your app without having to recompile your application or rebuild your assets.
Whether you’re building a stunning new desktop game with Vulkan or a beautifully immersive VR experience on Android, we hope that GAPID will save you both time and frustration and help you get the most out of your GPU. To get started with GAPID and see just how powerful it is, download it, take your favorite application, and capture a trace.
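The value of stepping through a captured trace can be sketched with a toy replay loop: apply each recorded command to a blank state and report the first command after which the frame no longer looks right. This is a deliberately simplified illustration of the workflow, not GAPID’s actual replay machinery; every name below is hypothetical.

```python
def find_bad_command(commands, apply_command, is_frame_correct):
    """Replay captured commands one by one against a blank state and
    return the index of the first command after which the frame is wrong.
    (Toy sketch; a real replay operates on actual GPU state.)"""
    state = {}
    for i, cmd in enumerate(commands):
        apply_command(state, cmd)
        if not is_frame_correct(state):
            return i  # first problematic call
    return None

# Hypothetical trace: command 2 clears the texture binding by mistake.
commands = ["bind_texture", "draw_mesh", "bind_texture:none", "draw_mesh"]

def apply_command(state, cmd):
    if cmd.startswith("bind_texture"):
        state["texture"] = None if cmd.endswith("none") else "wood"
    elif cmd == "draw_mesh":
        state["last_draw_textured"] = state.get("texture") is not None

def is_frame_correct(state):
    return state.get("last_draw_textured", True)

print(find_bad_command(commands, apply_command, is_frame_correct))
# prints: 3  (the second draw call renders untextured)
```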
By Sean Kelly, Product Management Director, Messenger Over the holidays we all want to take time to celebrate, reflect and stay in touch with the people we love. It’s been 25 years since the first text message was sent, sparking a revolution in how we keep in touch with each other, and we’ve come a […]
Over the past few years, many people have experienced virtual reality with headsets like Cardboard, Daydream View, and higher-end PC units like Oculus Rift and HTC Vive. Now, augmented reality has the potential to reach people right on their mobile devices. AR can bring information to you, and that digital information can enhance the experience you have with your physical space. However, AR is new, so creators need to think carefully when it comes to designing intuitive user interactions.
From our own explorations, we’ve learned a few things about design patterns that may be useful for creators as they consider mobile AR platforms. For this post, we revisited our learnings from designing for head-mounted displays, mobile virtual reality experiences, and depth-sensing augmented reality applications. First-party apps such as Google Earth VR and Tilt Brush allow users to explore and create with two positionally-tracked controllers. Daydream helped us understand the opportunities and constraints for designing immersive experiences for mobile. Mobile AR introduces a new set of interaction challenges. Our explorations show how we’ve attempted to adapt emerging patterns to address different physical environments and the need to hold the phone throughout an entire application session.
Key design considerations
Mobile constraints. Achieving immersive interactions is possible through a combination of the device’s camera, real-world coordinates for digital objects, and input methods of screen-touch and proximity. Since mobile AR experiences typically require at least one hand to hold the phone at all times, it’s important for interactions to be discoverable, intuitive, and easy to achieve with one or no hands. The mobile device is the user’s window into the augmented world, so creators must also consider ways to make their mobile AR experiences enjoyable and usable for varying screen sizes and orientations.
Mobile mental models and dimension-shifts. Content creators should keep in mind existing mental models of mobile AR users. 2D UI patterns, when locked to the user’s mobile screen, tend to lead to a more sedentary application experience; however, developers and designers can get creative with world-locked UI or other interaction patterns that encourage movement throughout the physical space in order to guide users toward a deeper and richer experience. The latter approach tends to be a more natural way to get users to learn and adapt to the 3D nature of their application session and more quickly begin to appreciate the value a mobile AR experience has to offer — such as observing augmented objects from many different angles.
Environmental considerations. Each application has a dedicated “experience space,” which is a combination of the physical space and range of motion the experience requires. Combined with ARCore’s ability to detect varying plane sizes or overlapping planes at different elevations, this opens the door to unique volumetric responsive design opportunities that allow creators to determine how digital objects should react or scale to the constraints of the user’s mobile play space. Visual cues like instructional text or character animations can direct users to move around their physical spaces in order to reinforce the context switch to AR and encourage proper environment scanning.
Visual affordances. Advanced screen display and lighting technology makes it possible for digitally rendered objects to appear naturally in the user’s environment. Volumetric UI patterns can complement a 3D mobile AR experience, but it’s still important that they stand out as interactive components so users get a sense of selection state and functionality. In addition to helping users interact with virtual objects in their environment, it’s important to communicate the planes that the mobile device detects in order to manage the users’ expectations for where digital items can be placed.
Mobile AR 2D interactions. With mobile AR, we’ve seen applications of a 2D screen-locked UI which gives users a “magic-hand” pattern to engage with the virtual world via touch inputs. The ability to interact with objects from a distance can be very empowering for users. However, because of 2D UI patterns’ previous association with movement-agnostic experiences, users are less likely to move around. If physical movement is a desired form of interaction, mobile AR creators can consider ways to more immediately use plane detection, digital object depth, and phone-position to motivate exploration of a volumetric space. But be wary of too much 2D UI, as it can break immersion and disconnect the user from the AR experience.
Mobile AR immersive interactions. To achieve immersion, we focused on core mobile AR interaction mechanics, including object interaction, browsing, information display, and visual guidance. It’s possible to optimize for readability, usability, and scale by considering ways to use a fixed position or dynamic scaling for digital objects. Using a reticle or raycast from the device is one way to understand intent and focus, and designers and developers may find it appropriate to have digital elements scale or react based on where the camera is pointing. Having characters react with an awareness of how close the user is, or revealing more information about an object as a user approaches, are a couple of great examples of how creators can use proximity cues to reward exploration and encourage interaction via movement.
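As a concrete (and heavily simplified) sketch of the proximity cue described above, the snippet below maps camera-to-object distance to a level of revealed detail. The distance thresholds and level names are invented for illustration; a real app would read the camera pose from its AR framework.

```python
def detail_level(camera_pos, object_pos, near=0.5, far=3.0):
    """Reveal more information as the user (the phone's camera)
    approaches a digital object. Distances are in meters;
    the thresholds here are illustrative, not recommendations."""
    dx, dy, dz = (c - o for c, o in zip(camera_pos, object_pos))
    distance = (dx * dx + dy * dy + dz * dz) ** 0.5
    if distance <= near:
        return "full"     # complete info card, interaction enabled
    if distance <= far:
        return "summary"  # name label only
    return "marker"       # simple marker inviting the user to approach

print(detail_level((0, 0, 0.4), (0, 0, 0)))  # prints: full
```

Evaluating this every frame lets characters or objects respond smoothly as the user walks closer, which is one way to reward physical movement.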
These are some early considerations for designers. Our team will be publishing guidelines for mobile AR design soon. There are so many unique problems that mobile AR can solve and so many delightful experiences it can unlock. We’re looking forward to seeing what users find compelling and sharing what we learn along the way, too. In the meantime, continue making and breaking things!
Images in this post by Chris Chamberlain
In October 2015, as part of our Digital News Initiative (DNI)—a partnership between Google and news publishers in Europe to support high-quality journalism through technology and innovation—we launched the €150 million DNI Innovation Fund. Today, we’re announcing the recipients of the fourth round of funding, with 102 projects in 26 European countries being offered €20,428,091 to support news innovation projects. This brings the total funding offered so far to €94 million.
In this fourth round, we received 685 project submissions from 29 countries. Of the 102 projects funded today, 47 are prototypes (early stage projects requiring up to €50,000 of funding), 33 are medium-sized projects (requiring up to €300,000 of funding) and 22 are large projects (requiring up to €1 million of funding).
In the last round, back in July, we saw a significant uptick in interest in fact-checking projects. That trend continues in this round, especially in the prototype category. In the medium and large categories, we encouraged applicants to focus on monetization, which led to a rise in projects seeking to use machine learning to improve content delivery and convert more readers into subscribers. Overall, 21 percent of the selected projects focus on the creation of new business models, and 13 percent focus on improving content discovery through personalization at scale. Around 37 percent of selected projects are collaborations between organizations with similar goals. Other projects include work on analytics measurement, audience development and new advertising opportunities. Here’s a sample of some of the projects funded in this round:
[Prototype] Stop Propaghate – Portugal
With €49,804 of funding from the DNI Fund, Stop Propaghate is developing an API supported by machine learning techniques that could help news media organizations 1) automatically identify whether a piece of news reporting contains hate speech, and 2) predict the likelihood that a news piece will generate comments containing hate speech. The project is being developed by the Institute for Systems and Computer Engineering, Technology and Science (INESC TEC), a research and development institute located at the University of Porto in Portugal.
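To illustrate only the shape of such an API, a sketch of its two predictions might look like the following. The keyword lookup is a crude stand-in for INESC TEC’s actual machine learning model (which is not described here), and every name and number is hypothetical.

```python
# Illustrative placeholder list; a real system would use a trained classifier.
HATE_MARKERS = {"vermin", "subhuman"}

def analyze_article(text):
    """Return the two predictions described in the post:
    1) does the reporting itself contain hate speech, and
    2) how likely is it to attract hateful comments.
    The scoring rule here is invented for illustration."""
    words = {w.strip(".,!?").lower() for w in text.split()}
    flagged = sorted(words & HATE_MARKERS)
    return {
        "contains_hate_speech": bool(flagged),                        # prediction 1
        "hateful_comment_risk": min(1.0, 0.2 + 0.4 * len(flagged)),   # prediction 2
        "matched_terms": flagged,
    }

result = analyze_article("Officials described the migrants as vermin.")
print(result["contains_hate_speech"])  # prints: True
```

In practice both outputs would come from models trained on labeled articles and comment threads rather than a word list.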
[Medium] SPOT – France
Spot is an Artificial Intelligence-powered marketplace for curating, translating and syndicating valuable articles among independent media organizations, and is being developed by VoxEurop, a European news and debate website. With €281,291 of funding from the DNI Innovation Fund, Spot will allow publishers to easily access, buy and republish top editorial from European news organizations in their own languages, using AI data-mining technologies, summarization techniques and automatic translation technologies, alongside human content curation.
[Large] ML-based journalistic content recommendation system – Finland
Digital news media companies produce much more content than ever reaches their readers, because existing content delivery mechanisms tend to serve customers en masse, instead of individually. With €490,000 of funding from the DNI Innovation Fund, Helsingin Sanomat will develop a content recommendation system, using machine learning technologies to learn and adapt according to individual user behavior, and taking into account editorial directives.
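A heavily simplified sketch of what such a system might do: rank unread articles by similarity to a reader’s history, nudged by an editorial weight so the newsroom’s directives still count. This is illustrative only, not Helsingin Sanomat’s implementation; all names and numbers are invented.

```python
from collections import Counter
from math import sqrt

def cosine(a, b):
    """Cosine similarity between two tag-count vectors."""
    dot = sum(a[k] * b[k] for k in a)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def recommend(history_tags, candidates, editorial_boost):
    """Rank candidate articles by similarity to the reader's history,
    plus an additive editorial weight per article."""
    profile = Counter(history_tags)
    scored = [
        (cosine(profile, Counter(tags)) + editorial_boost.get(article, 0.0), article)
        for article, tags in candidates.items()
    ]
    return [article for _, article in sorted(scored, reverse=True)]

history = ["politics", "economy", "politics"]
candidates = {
    "budget-analysis": ["economy", "politics"],
    "hockey-recap": ["sports"],
    "election-explainer": ["politics"],
}
print(recommend(history, candidates, {"election-explainer": 0.05})[0])
# prints: budget-analysis
```

A production system would learn these weights from behavior rather than hand-set them, but the shape (a per-user profile combined with editorial signals) is the same.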
The recipients of fourth round funding were announced at a DNI event in London, which brought together people from across the news industry to celebrate the impact of the DNI and Innovation Fund. Project teams that received funding in Rounds 1, 2 or 3 shared details of their work and demonstrated their successes in areas like local news, fact checking and monetization.
Since February 2016, we’ve evaluated more than 3,700 applications, carried out 935 interviews with project leaders, and offered 461 recipients in 29 countries a total of €94 million. It’s clear that these projects are helping to shape the future of high-quality journalism—and some of them are already directly benefiting the European public. The next application window will open in the spring. Watch out for details on the digitalnewsinitiative.com website and check out all DNI funded projects!
As 2017 draws to a close, it’s time to look back on the year that was with our annual Year in Search. As we do every year, we analyzed Google Trends data to see what the world was searching for.
2017 was the year we asked “how…?” How do wildfires start? How to calm a dog during a storm? How to make a protest sign? In fact, all of the “how” searches you see in the video were searched at least 10 times more this year than ever before. These questions show our shared desire to understand our experiences, to come to each other’s aid, and, ultimately, to move our world forward.
Many of our trending questions centered around the tragedies and disasters that touched every corner of the world. Hurricanes devastated the Caribbean, Houston and Florida. An earthquake struck Mexico City. Famine struck Somalia, and Rohingya refugees fled for safety. In these moments and others, our collective humanity shined as we asked “how to help” more than ever before.
We also searched for ways to serve our communities. People asked Google how to become police officers, paramedics, firefighters, social workers, activists, and other kinds of civil servants. Because we didn’t just want to help once, we wanted to give back year round.
Searches weren’t only related to current events—they were also a window into the things that delighted the world. “Despacito” had us dancing—and searching for its meaning. When it came to cyberslang like “tfw” and “ofc,” we were all ¯\_(ツ)_/¯. And, finally, there was slime. We searched how to make fluffy, stretchy, jiggly, sticky, and so many more kinds of slime….then we searched for how to clean slime out of carpet, and hair, and clothes.
From “how to watch the eclipse” and “how to shoot like Curry,” to “how to move forward” and “how to make a difference,” here’s to this Year in Search. To see the top trending lists from around the world, visit google.com/2017.
Since becoming a professor 12 years ago and joining Google a year ago, I’ve had the good fortune to work with many talented Chinese engineers, researchers and technologists. China is home to many of the world’s top experts in artificial intelligence (AI) and machine learning. All three winning teams of the ImageNet Challenge in the past three years have been largely composed of Chinese researchers. Chinese authors contributed 43 percent of all content in the top 100 AI journals in 2015—and when the Association for the Advancement of AI discovered that their annual meeting overlapped with Chinese New Year this year, they rescheduled.
I believe AI and its benefits have no borders. Whether a breakthrough occurs in Silicon Valley, Beijing or anywhere else, it has the potential to make life better for the entire world. As an AI-first company, we see this as an important part of our collective mission. And we want to work with the best AI talent, wherever that talent is, to achieve it.
That’s why I am excited to launch the Google AI China Center, our first such center in Asia, at our Google Developer Days event in Shanghai today. This Center joins other AI research groups we have all over the world, including in New York, Toronto, London and Zurich, all contributing towards the same goal of finding ways to make AI work better for everyone.
Focused on basic AI research, the Center will consist of a team of AI researchers in Beijing, supported by Google China’s strong engineering teams. We’ve already hired some top experts, and will be working to build the team in the months ahead (check our jobs site for open roles!). Along with Dr. Jia Li, Head of Research and Development at Google Cloud AI, I’ll be leading and coordinating the research. Besides publishing its own work, the Google AI China Center will also support the AI research community by funding and sponsoring AI conferences and workshops, and working closely with the vibrant Chinese AI research community.
Humanity is going through a huge transformation thanks to the phenomenal growth of computing and digitization. In just a few years, automatic image classification in photo apps has become a standard feature. And we’re seeing rapid adoption of natural language as an interface with voice assistants like Google Home. At Cloud, we see our enterprise partners using AI to transform their businesses in fascinating ways at an astounding pace. As technology starts to shape human life in more profound ways, we will need to work together to ensure that the AI of tomorrow benefits all of us.
The Google AI China Center is a small contribution to this goal. We look forward to working with the brightest AI researchers in China to help find solutions to the world’s problems.
Once again: the science of AI has no borders, and neither do its benefits.
Samsung Electronics is the driving force behind the technology securing the way we shop, the way we communicate, even the way we travel to different countries.
By Beth Loyd and Josh Mabry, News Partnerships When extreme weather threatens a community, people that may not normally follow local news rely on their local reporters and meteorologists to keep them informed and help them stay safe. We wanted…