Fostering a love for reading among Indonesian kids

Siti Arofa teaches a first grade class at SD Negeri Sidorukan in Gresik, East Java. Many of her students start the school year without foundational reading skills or even an awareness of how fun books can be. But she noticed that whenever she read out loud using different expressions and voices, the kids would sit up and their faces would light up with excitement. One 6-year-old student, Keyla, loves repeating the stories with a full imitation of Siti’s expressions. Developing this love for stories and storytelling has helped Keyla and her classmates improve their reading and speaking skills. She’s just one child. Imagine the impact that the availability of books and skilled teachers can have on generations of schoolchildren.

In Indonesia today, it’s estimated that for every 100 children who enter school, only 25 leave meeting minimum international standards of literacy and numeracy. This poses a range of challenges for a relatively young country, where nearly one-third of the population—approximately 90 million people—is below the age of 15.

To help foster a habit of reading, Google.org, as part of its $50 million commitment to close global learning gaps, is supporting Inibudi, Room to Read and Taman Bacaan Pelangi to reach 200,000 children across Indonesia.

We’ve consistently heard from Indonesian educators and nonprofits that there’s a need for more high-quality storybooks. With $2.5 million in grants, the nonprofits will create a free digital library of children’s stories that anyone can contribute to. Many Googlers based in our Jakarta office have already volunteered their time to translate existing children’s stories into Bahasa Indonesia to increase the diversity of reading resources that will live on this digital platform.

The nonprofits will develop teaching materials and carry out teacher training in eastern Indonesia to enhance teaching methods that improve literacy, and they’ll also help Indonesian authors and illustrators to create more engaging books for children.   

Through our support of this work, we hope we can inspire a lifelong love of reading for many more students like Keyla.

Photo credit: Room to Read


Save development time with our new 3D debugging tool

Developing 3D apps is complicated—whether you’re using a native graphics API or enlisting the help of your favorite game engine, there are thousands of graphics commands that have to come together perfectly to produce beautiful 3D visuals on your phone, desktop or VR headset.

To help developers diagnose rendering and performance issues with their Android and desktop applications, we’re releasing a new tool called GAPID (Graphics API Debugger). With GAPID, you can capture a trace of your application and step through each graphics command one-by-one. This lets you visualize how your final image is built and isolate calls with issues, so you spend less time debugging through trial and error until you find the source of the problem.
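
To make that scale concrete, here’s a minimal sketch of what a single frame’s command stream can look like, written in Kotlin against Android’s GLES20 bindings (the setup of the program and the “aPosition” attribute name are assumptions for illustration). A real app issues hundreds or thousands of such calls per frame, and each one is recorded in a trace:

```kotlin
import android.opengl.GLES20
import java.nio.FloatBuffer

// A minimal sketch of one frame's worth of GL commands. `program`,
// `vertexBuffer` and `vertexCount` are assumed to be set up during
// initialization; every call below shows up as a discrete command in a trace.
fun drawFrame(program: Int, vertexBuffer: FloatBuffer, vertexCount: Int) {
    GLES20.glClearColor(0f, 0f, 0f, 1f)
    GLES20.glClear(GLES20.GL_COLOR_BUFFER_BIT or GLES20.GL_DEPTH_BUFFER_BIT)

    GLES20.glUseProgram(program)
    val position = GLES20.glGetAttribLocation(program, "aPosition")
    GLES20.glEnableVertexAttribArray(position)
    GLES20.glVertexAttribPointer(position, 3, GLES20.GL_FLOAT, false, 0, vertexBuffer)

    GLES20.glDrawArrays(GLES20.GL_TRIANGLES, 0, vertexCount)
    GLES20.glDisableVertexAttribArray(position)
}
```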

The goal of GAPID is to help you save time and get the most out of your GPU. To get started with GAPID, download it, take your favorite application, and capture a trace!

The Google Assistant: coming to tablets and more Android phones

From phones to speakers to watches and more, the Google Assistant is already available across a number of devices and languages—and now, it’s coming to Android tablets running Android 7.0 Nougat and 6.0 Marshmallow and phones running 5.0 Lollipop.

The Google Assistant, now on tablets

With the Assistant on tablets, you can get help throughout your day—set reminders, add to your shopping list (and see that same list on your phone later), control your smart devices like plugs and lights, ask about the weather and more.

The Assistant on tablets will be rolling out over the coming week to users with the language set to English in the U.S.

Lollipop phones, introducing your Assistant

Earlier this year we first brought the Assistant to Android 6.0 Marshmallow and higher with Google Play Services. Today, we’re adding Android 5.0 Lollipop to the mix, so even more users can get help from the Google Assistant.

The Google Assistant on Android 5.0 Lollipop has started to roll out to users with the language set to English in the U.S., UK, India, Australia, Canada and Singapore, as well as in Spanish in the U.S., Mexico and Spain. It’s also rolling out to users in Italy, Japan, Germany, Brazil and Korea. Once you get the update and opt in, you’ll see an Assistant app icon in your “All apps” list.

So now the question is … What will you ask your Assistant first?

News Lab in 2017: Helping journalists use emerging technologies

This week we’re looking at the ways the Google News Lab is working with news organizations to build the future of journalism. Yesterday, we learned about how the News Lab works with newsrooms to address industry challenges. Today, we’ll take a look at how it helps the news industry take advantage of new technologies.

From Edward R. Murrow’s legendary radio broadcasts during World War II to smartphones chronicling every beat of the Arab Spring, technology has had a profound impact on how stories are discovered, told, and brought to new audiences. With the pace of innovation quickening, it’s essential that news organizations understand and take advantage of today’s emerging technologies. So one of the roles of the Google News Lab is to help newsrooms and journalists learn how to put new technologies to use to shape their reporting.

This past year, our programs, trainings and research gave journalists around the world the opportunity to experiment with three important areas of technology: data journalism; immersive tools like VR, AR and drones; and artificial intelligence (AI) and machine learning (ML).

Data journalism

The availability of data has had a profound impact on journalism, fueling powerful reporting, making complicated stories easier to understand, and providing readers with actionable real-time data. To inform our work in this space, this year we commissioned a study on the state of data journalism. The research found that data journalism is increasingly mainstream, with 51 percent of news organizations across the U.S. and Europe now having a dedicated data journalist.

Our efforts to help this growing class of journalists focus on two areas: curating Google data to fuel newsrooms’ work and building tools to make data journalism accessible.

On the curation side, we work with some of the world’s top data visualization specialists to inspire the industry with data visualizations like Inaugurate and A Year in Language. We’re particularly focused on ensuring news organizations can benefit from Google Trends data in important moments like elections. For example, we launched a Google Trends election hub for the German elections, highlighting Search interest in top political issues and parties, and worked with renowned data designer Moritz Stefaner to build a unique visualization to showcase the potential of the data to inform election coverage across European newsrooms.

We worked with renowned designer Moritz Stefaner to build a visualization that showcased the topics and political candidates most searched in Germany during the German elections.

We’re also building tools that can help make data journalism accessible to more newsrooms. We expanded Tilegrams, a tool to create hexagon maps and other cartograms more easily, to support Germany and France in the runup to the elections in both countries. And we partnered with the data visualization design team Kiln to make Flourish, a tool that offers complex visualization templates, freely available to newsrooms and journalists.

Immersive storytelling

As new mediums of storytelling emerge, new techniques and ideas need to be developed and refined to unlock the potential of these technologies for journalists. This year, we focused on two technologies that are making storytelling in journalism more compelling: virtual reality and drones.

Virtual reality
We kicked off the year by commissioning a research study to provide news organizations a better sense of how to use VR in journalism. The study found, for instance, that VR is better suited to convey an emotional impression rather than information. We looked to build on those insights by helping news organizations like Euronews and the South China Morning Post experiment with VR to create stories. And we documented best practices and learnings to share with the broader community.

We also looked to strengthen the ecosystem for VR journalism by growing Journalism 360, a group of news industry experts, practitioners and journalists dedicated to empowering experimentation in VR journalism. In 2017, J360 hosted in-person trainings on using VR in journalism from London to Austin, Hong Kong to Berlin. Alongside the Knight Foundation and the Online News Association, we provided $250,000 in grants for projects to advance the field of immersive storytelling.

Drones
The Federal Aviation Administration’s recent relaxation of drone regulations made the technology more accessible to newsrooms across the U.S., leading to growing interest in drone journalism. Alongside the Poynter Institute and the National Press Photographers Association, we hosted four drone journalism camps across America where more than 300 journalists and photographers learned about the legal, ethical and aeronautical issues of drone journalism. The camps helped inspire the use of drones in local and national news stories. Following the camps, we also hosted a leadership summit, where newsroom leaders convened to discuss key challenges and how to work together to grow this emerging field of journalism.

A drone is being readied to capture footage across Hong Kong for the South China Morning Post’s immersive piece, “The Evolution of Hong Kong.”

Artificial intelligence

We want to help newsrooms better understand and use artificial intelligence (AI), a technological development that holds tremendous promise—but also many unanswered questions. To try to get to some of the answers, we convened CTOs from the New York Times and the Associated Press at our New York office to talk about the future of AI in journalism and the challenges and opportunities it presents for newsrooms.

We also launched an experimental project with ProPublica, Documenting Hate, which uses AI to help build a national database of hate crimes and bias incidents. Hate crimes in America have historically been difficult to track since very little official data is collected at the national level. By using AI, news organizations are able to close some of the gaps in the data and begin building a national database.

Documenting Hate, our partnership with ProPublica, used AI to help create a national database to track hate crime and bias incidents.

Finally, to ensure fairness and inclusivity in the way AI is developed and applied, we partnered with MediaShift on a Diversifying AI hackathon. The event, which convened 45 women from across the U.S., focused on coming up with solutions that help bridge gaps between AI and media.

2018 will no doubt bring more opportunity for journalists to innovate using technology. We’d love to hear from journalists about what technologies we can make more accessible and what kinds of programs or hackathons you’d like to see—let us know.

Diagnose and understand your app’s GPU behavior with GAPID

Posted by Andrew Woloszyn, Software Engineer

Developing for 3D is complicated. Whether you’re using a native graphics API or enlisting the help of your favorite game engine, there are thousands of graphics commands that have to come together perfectly to produce beautiful 3D images on your phone, desktop or VR headset.

GAPID (Graphics API Debugger) is a new tool that helps developers diagnose rendering and performance issues with their applications. With GAPID, you can capture a trace of your application and step through each graphics command one-by-one. This lets you visualize how your final image is built and isolate problematic calls, so you spend less time debugging through trial and error.

GAPID supports OpenGL ES on Android, and Vulkan on Android, Windows and Linux.

Debugging in action, one draw call at a time

GAPID not only enables you to diagnose issues with your rendering commands, but also acts as a tool to run quick experiments and see immediately how these changes would affect the presented frame.

Here are a few examples where GAPID can help you isolate and fix issues with your application:

What’s the GPU doing?

Why isn’t my text appearing?!

Working with a graphics API can be frustrating when you get an unexpected result, whether it’s a blank screen, an upside-down triangle, or a missing mesh. As an offline debugger, GAPID lets you take a trace of these applications, and then inspect the calls afterwards. You can track down exactly which command produced the incorrect result by looking at the framebuffer, and inspect the state at that point to help you diagnose the issue.
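
As a hypothetical example of the kind of bug this workflow catches (a sketch in Kotlin against Android’s GLES20 bindings, not a case from the GAPID team): the draw routine below never enables its vertex attribute array, a mistake that produces no error in the code but is easy to spot while stepping through a trace.

```kotlin
import android.opengl.GLES20
import java.nio.FloatBuffer

// Hypothetical buggy draw routine: the mesh never shows up on screen.
fun drawMesh(program: Int, vertexBuffer: FloatBuffer, vertexCount: Int) {
    GLES20.glUseProgram(program)
    val position = GLES20.glGetAttribLocation(program, "aPosition")
    GLES20.glVertexAttribPointer(position, 3, GLES20.GL_FLOAT, false, 0, vertexBuffer)
    // Bug: glEnableVertexAttribArray(position) was never called, so every
    // vertex reads the same constant attribute value and nothing visible is
    // drawn. Stepping through the trace, the framebuffer is unchanged after
    // this draw call, and the state view shows the attribute array disabled.
    GLES20.glDrawArrays(GLES20.GL_TRIANGLES, 0, vertexCount)
}
```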

What happens if I do X?

Using GAPID to edit shader code

Even when a program is working as expected, sometimes you want to experiment. GAPID allows you to modify API calls and shaders at will, so you can test things like:

  • What if I used a different texture on this object?
  • What if I changed the calculation of bloom in this shader?

With GAPID, you can now iterate on the look and feel of your app without having to recompile your application or rebuild your assets.
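
To give a feel for the shader experiment, here’s a hypothetical fragment shader of the sort you could edit inside a captured trace (the uniform names and the Kotlin wrapper are assumptions for illustration, not code from GAPID):

```kotlin
// Hypothetical fragment shader of the sort you might tweak inside a trace.
// The uniform names (sceneTex, bloomTex, bloomStrength) are assumptions.
val fragmentShaderSource = """
    precision mediump float;
    uniform sampler2D sceneTex;
    uniform sampler2D bloomTex;
    uniform float bloomStrength;
    varying vec2 vTexCoord;

    void main() {
        vec3 scene = texture2D(sceneTex, vTexCoord).rgb;
        vec3 bloom = texture2D(bloomTex, vTexCoord).rgb;
        // Experiment: scale the bloom term (e.g. by 0.5) in the captured
        // trace and replay the frame; no recompile, no asset rebuild.
        gl_FragColor = vec4(scene + bloom * bloomStrength, 1.0);
    }
"""
```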

Whether you’re building a stunning new desktop game with Vulkan or a beautifully immersive VR experience on Android, we hope that GAPID will save you both time and frustration and help you get the most out of your GPU. To get started with GAPID and see just how powerful it is, download it, take your favorite application, and capture a trace!

Best practices for mobile AR design

Over the past few years, many people have experienced virtual reality with headsets like Cardboard, Daydream View, and higher-end PC units like Oculus Rift and HTC Vive. Now, augmented reality has the potential to reach people right on their mobile devices. AR can bring information to you, and that digital information can enhance the experience you have with your physical space. However, AR is new, so creators need to think carefully about designing intuitive user interactions.

From our own explorations, we’ve learned a few things about design patterns that may be useful for creators as they consider mobile AR platforms. For this post, we revisited our learnings from designing for head-mounted displays, mobile virtual reality experiences, and depth-sensing augmented reality applications. First-party apps such as Google Earth VR and Tilt Brush allow users to explore and create with two positionally-tracked controllers. Daydream helped us understand the opportunities and constraints for designing immersive experiences for mobile. Mobile AR introduces a new set of interaction challenges. Our explorations show how we’ve attempted to adapt emerging patterns to address different physical environments and the need to hold the phone throughout an entire application session.

Key design considerations

Mobile constraints. Achieving immersive interactions is possible through a combination of the device’s camera, real-world coordinates for digital objects, and input methods of screen-touch and proximity. Since mobile AR experiences typically require at least one hand to hold the phone at all times, it’s important for interactions to be discoverable, intuitive, and easy to achieve with one or no hands. The mobile device is the user’s window into the augmented world, so creators must also consider ways to make their mobile AR experiences enjoyable and usable for varying screen sizes and orientations.

Mobile mental models and dimension-shifts. Content creators should keep in mind existing mental models of mobile AR users. 2D UI patterns, when locked to the user’s mobile screen, tend to lead to a more sedentary application experience; however, developers and designers can get creative with world-locked UI or other interaction patterns that encourage movement throughout the physical space in order to guide users toward a deeper and richer experience. The latter approach tends to be a more natural way to get users to learn and adapt to the 3D nature of their application session and more quickly begin to appreciate the value a mobile AR experience has to offer — such as observing augmented objects from many different angles.

Environmental considerations. Each application has a dedicated “experience space,” which is a combination of the physical space and range of motion the experience requires. Combined with ARCore’s ability to detect varying plane sizes or overlapping planes at different elevations, this opens the door to unique volumetric responsive design opportunities that allow creators to determine how digital objects should react or scale to the constraints of the user’s mobile play space. Visual cues like instructional text or character animations can direct users to move around their physical spaces in order to reinforce the context switch to AR and encourage proper environment scanning.
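
As a rough sketch of what acting on that plane information can look like (Kotlin against the ARCore SDK; the minimum-size threshold is an arbitrary assumption), an app can check whether the scanned surfaces are large enough for its experience space and prompt the user to keep scanning if not:

```kotlin
import com.google.ar.core.Plane
import com.google.ar.core.Session
import com.google.ar.core.TrackingState

// Sketch: decide whether the user's scanned space can host the experience.
// The 0.5m x 0.5m minimum extent is an arbitrary assumption for illustration.
fun hasUsableSurface(session: Session): Boolean =
    session.getAllTrackables(Plane::class.java).any { plane ->
        plane.trackingState == TrackingState.TRACKING &&
            plane.extentX >= 0.5f && plane.extentZ >= 0.5f
    }

// If this returns false, show instructional text or a character animation
// asking the user to move the phone and scan more of the room.
```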

Visual affordances. Advanced screen display and lighting technology makes it possible for digitally rendered objects to appear naturally in the user’s environment. Volumetric UI patterns can complement a 3D mobile AR experience, but it’s still important that they stand out as interactive components so users get a sense of selection state and functionality. In addition to helping users interact with virtual objects in their environment, it’s important to communicate the planes that the mobile device detects in order to manage the users’ expectations for where digital items can be placed.

Mobile AR 2D interactions. With mobile AR, we’ve seen applications of a 2D screen-locked UI that gives users a “magic-hand” pattern to engage with the virtual world via touch inputs, as in the sketch below. The ability to interact with objects from a distance can be very empowering for users. However, because of 2D UI patterns’ previous association with movement-agnostic experiences, users are less likely to move around. If physical movement is a desired form of interaction, mobile AR creators can consider ways to more immediately use plane detection, digital object depth, and phone position to motivate exploration of a volumetric space. But be wary of too much 2D UI, as it can break immersion and disconnect the user from the AR experience.
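
Here’s a minimal sketch of the “magic-hand” pattern (Kotlin against the ARCore SDK): a 2D tap is raycast into the scene with a hit test, and an anchor is created where the ray meets a detected plane, so users can place or select distant objects without moving.

```kotlin
import android.view.MotionEvent
import com.google.ar.core.Anchor
import com.google.ar.core.Frame
import com.google.ar.core.Plane

// Sketch: turn a 2D screen tap into a 3D placement in the augmented scene.
fun placeObjectAtTap(frame: Frame, tap: MotionEvent): Anchor? {
    for (hit in frame.hitTest(tap)) {
        val trackable = hit.trackable
        // Only accept hits that land on a detected plane, inside its polygon.
        if (trackable is Plane && trackable.isPoseInPolygon(hit.hitPose)) {
            return hit.createAnchor() // attach your renderable to this anchor
        }
    }
    return null // no plane under the finger yet; keep scanning
}
```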

Mobile AR immersive interactions. To achieve immersion, we focused on core mobile AR interaction mechanics ranging from object interaction and browsing to information display and visual guidance. It’s possible to optimize for readability, usability, and scale by considering ways to use a fixed position or dynamic scaling for digital objects. Using a reticle or raycast from the device is one way to understand intent and focus, and designers and developers may find it appropriate to have digital elements scale or react based on where the camera is pointing. Having characters react with an awareness of how close the user is, or revealing more information about an object as a user approaches, are a couple of great examples of how creators can use proximity cues to reward exploration and encourage interaction via movement.
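
A proximity cue can be as simple as measuring the camera-to-object distance each frame. The sketch below (Kotlin against the ARCore SDK; the one-meter threshold is an arbitrary assumption) returns true when the user has physically approached an anchored object, at which point a character could react or extra detail could be revealed:

```kotlin
import com.google.ar.core.Anchor
import com.google.ar.core.Frame
import kotlin.math.sqrt

// Sketch: reward physical exploration by reacting when the user gets close.
// The 1.0m threshold is an arbitrary assumption for illustration.
fun isUserNearby(frame: Frame, anchor: Anchor): Boolean {
    val cam = frame.camera.pose
    val obj = anchor.pose
    val dx = cam.tx() - obj.tx()
    val dy = cam.ty() - obj.ty()
    val dz = cam.tz() - obj.tz()
    return sqrt(dx * dx + dy * dy + dz * dz) < 1.0f
}
```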

What’s next?

These are some early considerations for designers. Our team will be publishing guidelines for mobile AR design soon. There are so many unique problems that mobile AR can solve and so many delightful experiences it can unlock. We’re looking forward to seeing what users find compelling and sharing what we learn along the way, too. In the meantime, continue making and breaking things!

Images in this post by Chris Chamberlain

Digital News Initiative: €20 million of funding for innovation in news

In October 2015, as part of our Digital News Initiative (DNI)—a partnership between Google and news publishers in Europe to support high-quality journalism through technology and innovation—we launched the €150 million DNI Innovation Fund. Today, we’re announcing the recipients of the fourth round of funding, with 102 projects in 26 European countries being offered €20,428,091 to support news innovation projects. This brings the total funding offered so far to €94 million.

In this fourth round, we received 685 project submissions from 29 countries. Of the 102 projects funded today, 47 are prototypes (early stage projects requiring up to €50,000 of funding), 33 are medium-sized projects (requiring up to €300,000 of funding) and 22 are large projects (requiring up to €1 million of funding).

In the last round, back in July, we saw a significant uptick in interest in fact checking projects. That trend continues in this round, especially in the prototype project category. In the medium and large categories, we encouraged applicants to focus on monetization, which led to a rise in medium and large projects seeking to use machine learning to improve content delivery and turn more readers into subscribers. Overall, 21 percent of the selected projects focus on the creation of new business models, and 13 percent on improving content discovery by using personalization at scale. Around 37 percent of selected projects are collaborations between organizations with similar goals. Other projects include work on analytics measurement, audience development and new advertising opportunities. Here’s a sample of some of the projects funded in this round:

[Prototype] Stop Propaghate – Portugal

With €49,804 of funding from the DNI Fund, Stop Propaghate is developing an API, supported by machine learning techniques, that could help news media organizations 1) automatically identify whether a piece of news reporting contains hate speech, and 2) predict the likelihood that a news piece will generate comments containing hate speech. The project is being developed by the Institute for Systems and Computer Engineering, Technology and Science (INESC TEC), a research and development institute located at the University of Porto in Portugal.
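
Purely as a hypothetical sketch of what such an API’s surface could look like (the names and types below are our assumptions, not Stop Propaghate’s actual design), the two capabilities map naturally onto a small two-part result:

```kotlin
// Hypothetical interface only; the names and shape are illustrative
// assumptions, not Stop Propaghate's published API.
data class HateSpeechResult(
    val containsHateSpeech: Boolean, // 1) detection in the news piece itself
    val commentRiskScore: Double     // 2) likelihood of hateful comments, 0..1
)

interface HateSpeechApi {
    fun analyze(newsText: String): HateSpeechResult
}
```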

[Medium] SPOT – France

Spot is an Artificial Intelligence-powered marketplace for curating, translating and syndicating valuable articles among independent media organizations, and is being developed by VoxEurop, a European news and debate website. With €281,291 of funding from the DNI Innovation Fund, Spot will allow publishers to easily access, buy and republish top editorial from European news organizations in their own languages, using AI data-mining technologies, summarization techniques and automatic translation technologies, alongside human content curation.

[Large] ML-based journalistic content recommendation system – Finland

Digital news media companies produce much more content than ever reaches their readers, because existing content delivery mechanisms tend to serve customers en masse, instead of individually. With €490,000 of funding from the DNI Innovation Fund, Helsingin Sanomat will develop a content recommendation system, using machine learning technologies to learn and adapt according to individual user behavior, and taking into account editorial directives.
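
As a hypothetical illustration of blending learned relevance with editorial directives (the names and weights below are assumptions, not Helsingin Sanomat’s actual system), a ranker might score each article as a weighted mix of a per-user model score and an editorially assigned boost:

```kotlin
// Hypothetical sketch: a per-article editorial boost blended with a learned
// per-user relevance score. Names and weights are illustrative assumptions.
data class Article(val id: String, val editorialBoost: Double) // boost in [0, 1]

fun rank(
    articles: List<Article>,
    userRelevance: (Article) -> Double // learned from individual behavior, in [0, 1]
): List<Article> =
    articles.sortedByDescending { article ->
        0.8 * userRelevance(article) + 0.2 * article.editorialBoost
    }
```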

The recipients of fourth round funding were announced at a DNI event in London, which brought together people from across the news industry to celebrate the impact of the DNI and Innovation Fund. Project teams that received funding in Rounds 1, 2 or 3 shared details of their work and demonstrated their successes in areas like local news, fact checking and monetization.

Since February 2016, we’ve evaluated more than 3,700 applications, carried out 935 interviews with project leaders, and offered 461 recipients in 29 countries a total of €94 million. It’s clear that these projects are helping to shape the future of high-quality journalism—and some of them are already directly benefiting the European public. The next application window will open in the spring. Keep an eye on the digitalnewsinitiative.com website for details, and check out all the DNI-funded projects!

The Year in Search: the questions we asked in 2017

As 2017 draws to a close, it’s time to look back on the year that was with our annual Year in Search. As we do every year, we analyzed Google Trends data to see what the world was searching for.

2017 was the year we asked “how…?” How do wildfires start? How to calm a dog during a storm? How to make a protest sign? In fact, all of the “how” searches you see in the video were searched at least 10 times more this year than ever before. These questions show our shared desire to understand our experiences, to come to each other’s aid, and, ultimately, to move our world forward. 

Growth of “how” searches over time

Many of our trending questions centered around the tragedies and disasters that touched every corner of the world. Hurricanes devastated the Caribbean, Houston and Florida. An earthquake struck Mexico City. Famine struck Somalia, and Rohingya refugees fled for safety. In these moments and others, our collective humanity shined as we asked “how to help” more than ever before.

We also searched for ways to serve our communities. People asked Google how to become police officers, paramedics, firefighters, social workers, activists, and other kinds of civil servants. Because we didn’t just want to help once, we wanted to give back year round.

Searches weren’t only related to current events—they were also a window into the things that delighted the world. “Despacito” had us dancing—and searching for its meaning. When it came to cyberslang like “tfw” and “ofc,” we were all ¯\_(ツ)_/¯. And, finally, there was slime. We searched how to make fluffy, stretchy, jiggly, sticky, and so many more kinds of slime….then we searched for how to clean slime out of carpet, and hair, and clothes.

From “how to watch the eclipse” and “how to shoot like Curry,” to “how to move forward” and “how to make a difference,” here’s to this Year in Search. To see the top trending lists from around the world, visit google.com/2017.

Search on.

Opening the Google AI China Center

Since becoming a professor 12 years ago and joining Google a year ago, I’ve had the good fortune to work with many talented Chinese engineers, researchers and technologists. China is home to many of the world’s top experts in artificial intelligence (AI) and machine learning. All three winning teams of the ImageNet Challenge in the past three years have been largely composed of Chinese researchers. Chinese authors contributed 43 percent of all content in the top 100 AI journals in 2015—and when the Association for the Advancement of AI discovered that their annual meeting overlapped with Chinese New Year this year, they rescheduled.

I believe AI and its benefits have no borders. Whether a breakthrough occurs in Silicon Valley, Beijing or anywhere else, it has the potential to make life better for everyone, everywhere. As an AI-first company, Google considers this an important part of our collective mission. And we want to work with the best AI talent, wherever that talent is, to achieve it.

That’s why I am excited to launch the Google AI China Center, our first such center in Asia, at our Google Developer Days event in Shanghai today. This Center joins other AI research groups we have all over the world, including in New York, Toronto, London and Zurich, all contributing towards the same goal of finding ways to make AI work better for everyone.

Focused on basic AI research, the Center will consist of a team of AI researchers in Beijing, supported by Google China’s strong engineering teams. We’ve already hired some top experts, and will be working to build the team in the months ahead (check our jobs site for open roles!). Along with Dr. Jia Li, Head of Research and Development at Google Cloud AI, I’ll be leading and coordinating the research. Besides publishing its own work, the Google AI China Center will also support the AI research community by funding and sponsoring AI conferences and workshops, and by working closely with China’s vibrant AI research community.

Humanity is going through a huge transformation thanks to the phenomenal growth of computing and digitization. In just a few years, automatic image classification in photo apps has become a standard feature. And we’re seeing rapid adoption of natural language as an interface with voice assistants like Google Home. At Google Cloud, we see our enterprise partners using AI to transform their businesses in fascinating ways at an astounding pace. As technology starts to shape human life in more profound ways, we will need to work together to ensure that the AI of tomorrow benefits all of us.

The Google AI China Center is a small contribution to this goal. We look forward to working with the brightest AI researchers in China to help find solutions to the world’s problems. 

Once again, the science of AI has no borders, and neither do its benefits.