One student’s quest to track endangered whales with machine learning

Ever since I can remember, music has been a huge part of who I am. Growing up, my parents formed a traditional Mexican trio band and their music filled the rooms of my childhood home. I’ve always felt deeply moved by music, and I’m fascinated by the emotions music brings out in people.

When I attended community college and took my first physics course, I was introduced to the science of music—how it’s a complex assembly of overlapping sound waves that we sense through the vibrations they create in our eardrums. Though my parents had always taken an artistic approach to playing with sound waves, I took a scientific one. Studying acoustics opened doors I never thought possible, from pursuing a career in electrical engineering to studying whale calls using machine learning.


Daniel with his family during move-in day for his first quarter at Cal Poly.

I applied to the Monterey Bay Aquarium Research Institute (MBARI) summer internship program, where I learned about John Ryan and Danelle Cline’s research using machine learning (ML) to monitor whale sounds. Once again, I found myself fascinated by sound, this time by analyzing the sounds of endangered blue and fin whales to further understand their ecology. By identifying and tracking the whales’ calls and changing migration patterns, scientists hope to gain insight into the broader impacts of climate change on ocean ecology, and into how human influence negatively impacts marine life.

MBARI had already collected thousands of hours of audio, but sifting through all of that data by hand to find whale calls would have been far too cumbersome. That’s what led Danelle to introduce me to machine learning. ML enables us to pick out patterns from very large data sets like MBARI’s audio recordings. By training the model using TensorFlow, we can efficiently sift through the data and track these whales with 98 percent accuracy. This tracking system can tell us how many calls were made in any given amount of time near Monterey Bay, and will enable scientists at MBARI to track the whales’ changing migration behavior and advance their research on whale ecology and how human influence above water negatively impacts marine life below.
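
To make that concrete: the usual recipe for this kind of acoustic detection is to slice the audio into windows, convert each window into a spectrogram, and train a classifier on many labeled examples. Below is a minimal, illustrative sketch of the spectrogram step in TensorFlow; the sample rate, window sizes and frequency range are assumptions chosen for low-frequency whale calls, not MBARI’s actual pipeline.

    import tensorflow as tf

    # Illustrative preprocessing only; not MBARI's actual pipeline.
    # Turn a window of hydrophone audio into a log-mel spectrogram, the
    # image-like input a TensorFlow classifier would be trained on.
    sample_rate = 2000                             # assumed; blue/fin whale calls are very low frequency
    audio = tf.random.normal([sample_rate * 10])   # stand-in for 10 seconds of recorded audio

    stft = tf.signal.stft(audio, frame_length=256, frame_step=128)
    magnitude = tf.abs(stft)

    mel_matrix = tf.signal.linear_to_mel_weight_matrix(
        num_mel_bins=64,
        num_spectrogram_bins=stft.shape[-1],
        sample_rate=sample_rate,
        lower_edge_hertz=10.0,
        upper_edge_hertz=500.0,
    )
    log_mel = tf.math.log(tf.tensordot(magnitude, mel_matrix, 1) + 1e-6)
    print(log_mel.shape)   # (frames, 64): one "spectrogram image" per audio window

A convolutional model trained on many such windows, labeled “call” or “no call,” is what makes it practical to scan years of recordings automatically.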

What started as a passion for music ended in a love of engineering thanks to the opportunity at MBARI. Before community college I had no idea what an engineer even did, and I certainly never imagined my music background would be relevant in using TensorFlow to identify and classify whale calls within a sea of ocean audio data. But I soon learned there’s more than one way to pursue a passion, and I’m excited for what the future holds—for marine life, for machine learning, and for myself. Following the whales on their journey has led me to begin mine.

How TensorFlow is powering technology around the world

Editor’s Note: AI is behind many of Google’s products and is a big priority for us as a company (as you may have heard at Google I/O yesterday). So we’re sharing highlights on how AI already affects your life in ways you might not know, and how people from all over the world have used AI to build their own technology.

Machine learning is at the core of many of Google’s own products, but TensorFlow—our open source machine learning framework—has also been an essential component of the work of scientists, researchers and even high school students around the world. At Google I/O, we’re hearing from some of these people, who are solving big (we mean, big) problems—the origin of the universe, that sort of stuff. Here are some of the interesting ways they’re using TensorFlow to aid their work.

Ari Silburt, a Ph.D. student at Penn State University, wants to uncover the origins of our solar system. In order to do this, he has to map craters in the solar system, which helps him figure out where matter has existed in various places (and at various times) in the solar system. You with us? Historically, this process has been done by hand and is both time consuming and subjective, but Ari and his team turned to TensorFlow to automate it. They’ve trained the machine learning model using existing photos of the moon, and have identified more than 6,000 new craters.


On the left is a picture of the moon; it’s hard to tell where the heck those craters are. On the right is an accurate depiction of crater distribution, thanks to TensorFlow.

Switching from outer space to the rainforests of Brazil: Topher White (founder of Rainforest Connection) invented “The Guardian” device to prevent illegal deforestation in the Amazon. The devices—upcycled cell phones running TensorFlow—are installed in trees throughout the forest, recognize the sound of chainsaws and logging trucks, and alert the rangers who police the area. Without these devices, the land must be policed by people, which is nearly impossible given the massive area it covers.


Topher installs Guardian devices in the tall trees of the Amazon.

Diabetic retinopathy (DR) is the fastest-growing cause of blindness, with nearly 415 million diabetic patients at risk worldwide. If caught early, the disease can be treated; if not, it can lead to irreversible blindness. In 2016, we announced that machine learning was being used to aid diagnostic efforts in the area of DR, by analyzing a patient’s fundus image (a photo of the back of the eye) with higher accuracy. Now we’re taking those fundus images to the next level with TensorFlow. Dr. Jorge Cuadros, an optometrist in Oakland, CA, is able to determine a patient’s risk of cardiovascular disease by analyzing their fundus image with a deep learning model.


Fundus image of an eye with sight-threatening retinal disease. With machine learning this image will tell doctors much more than eye health.

Good news for the green thumbs of the world: Shaza Mehdi and Nile Ravenell are high school students who developed PlantMD, an app that lets you figure out if your plant is diseased. The machine learning model runs on TensorFlow, and Shaza and Nile used data from plantvillage.com and a few university databases to train the model to recognize diseased plants. Shaza also built another app that uses a similar approach to diagnose skin disease.

Shaza developed PlantMD, an app that recognizes diseased plants

Shaza’s story

To learn more about how AI can bring benefits to everyone, check out ai.google.

Solving problems with AI for everyone

Today, we’re kicking off our annual I/O developer conference, which brings together more than 7,000 developers for a three-day event. I/O gives us a great chance to share some of Google’s latest innovations and show how they’re helping us solve problems for our users. We’re at an important inflection point in computing, and it’s exciting to be driving technology forward. It’s clear that technology can be a positive force and improve the quality of life for billions of people around the world. But it’s equally clear that we can’t just be wide-eyed about what we create. There are very real and important questions being raised about the impact of technology and the role it will play in our lives. We know the path ahead needs to be navigated carefully and deliberately—and we feel a deep sense of responsibility to get this right. It’s in that spirit that we’re approaching our core mission.

The need for useful and accessible information is as urgent today as it was when Google was founded nearly two decades ago. What’s changed is our ability to organize information and solve complex, real-world problems thanks to advances in AI.

Pushing the boundaries of AI to solve real-world problems

There’s a huge opportunity for AI to transform many fields. Already we’re seeing some encouraging applications in healthcare. Two years ago, Google developed a neural net that could detect signs of diabetic retinopathy using medical images of the eye. This year, the AI team showed our deep learning model could use those same images to predict a patient’s risk of a heart attack or stroke with a surprisingly high degree of accuracy. We published a paper on this research in February and look forward to working closely with the medical community to understand its potential. We’ve also found that our AI models are able to predict medical events, such as hospital readmissions and length of stay, by analyzing the pieces of information embedded in de-identified health records. These are powerful tools in a doctor’s hands and could have a profound impact on health outcomes for patients. We’re publishing a paper on this research today and are working with hospitals and medical institutions to see how to use these insights in practice.

Another area where AI can solve important problems is accessibility. Take the example of captions. When you turn on the TV it’s not uncommon to see people talking over one another. This makes a conversation hard to follow, especially if you’re hearing-impaired. But using audio and visual cues together, our researchers were able to isolate voices and caption each speaker separately. We call this technology Looking to Listen and are excited about its potential to improve captions for everyone.

Saving time across Gmail, Photos, and the Google Assistant

AI is working hard across Google products to save you time. One of the best examples of this is the new Smart Compose feature in Gmail. By understanding the context of an email, we can suggest phrases to help you write quickly and efficiently. In Photos, we make it easy to share a photo instantly via smart, inline suggestions. We’re also rolling out new features that let you quickly brighten a photo, give it a color pop, or even colorize old black and white pictures.

One of the biggest time-savers of all is the Google Assistant, which we announced two years ago at I/O. Today we shared our plans to make the Google Assistant more visual, more naturally conversational, and more helpful.

Thanks to our progress in language understanding, you’ll soon be able to have a natural back-and-forth conversation with the Google Assistant without repeating “Hey Google” for each follow-up request. We’re also adding half a dozen new voices to personalize your Google Assistant, plus one very recognizable one—John Legend (!). So, next time you ask Google to tell you the forecast or play “All of Me,” don’t be surprised if John Legend himself is around to help.

We’re also making the Assistant more visually assistive with new experiences for Smart Displays and phones. On mobile, we’ll give you a quick snapshot of your day with suggestions based on location, time of day, and recent interactions. And we’re bringing the Google Assistant to navigation in Google Maps, so you can get information while keeping your hands on the wheel and your eyes on the road.

Someday soon, your Google Assistant might be able to help with tasks that still require a phone call, like booking a haircut or verifying a store’s holiday hours. We call this new technology Google Duplex. It’s still early, and we need to get the experience right, but done correctly we believe this will save time for people and generate value for small businesses.

  • Smart Compose can understand the context of an email and suggest phrases to help you write quickly and efficiently.

  • With Google Photos, we’re working on the ability for you to change black-and-white shots into color in just a tap.

  • With Smart Displays, the Google Assistant is becoming more visual.

Understanding the world so we can help you navigate yours

AI’s progress in understanding the physical world has dramatically improved Google Maps and created new applications like Google Lens. Maps can now tell you if the business you’re looking for is open, how busy it is, and whether parking is easy to find before you arrive. Lens lets you just point your camera and get answers about everything from that building in front of you … to the concert poster you passed … to that lamp you liked in the store window.

Bringing you the top news from top sources

We know people turn to Google to provide dependable, high-quality information, especially in breaking news situations—and this is another area where AI can make a big difference. Using the latest technology, we set out to create a product that surfaces the news you care about from trusted sources while still giving you a full range of perspectives on events. Today, we’re launching the new Google News. It uses artificial intelligence to bring forward the best of human intelligence—great reporting done by journalists around the globe—and will help you stay on top of what’s important to you.


The new Google News uses AI to bring forward great reporting done by journalists around the globe and help you stay on top of what’s important to you.

Helping you focus on what matters

Advances in computing are helping us solve complex problems and deliver valuable time back to our users—which has been a big goal of ours from the beginning. But we also know technology creates its own challenges. For example, many of us feel tethered to our phones and worry about what we’ll miss if we’re not connected. We want to help people find the right balance and gain a sense of digital wellbeing. To that end, we’re going to release a series of features to help people understand their usage habits and use simple cues to disconnect when they want to, such as turning a phone over on a table to put it in “shush” mode, or “taking a break” from watching YouTube when a reminder pops up. We’re also kicking off a longer-term effort to support digital wellbeing, including a user education site which is launching today.

These are just a few of the many, many announcements at Google I/O. For the rest, covering Android, the Google Assistant, Google News, Photos, Lens, Maps and more, please see our latest stories.

Microsoft and Adaptive Biotechnologies announce partnership using AI to decode immune system; diagnose, treat disease

The human immune system is an astonishing diagnostic system, continuously adapting itself to detect any signal of disease in the body. Essentially, the state of the immune system tells a story about virtually everything affecting a person’s health. It may sound like science fiction, but what if we could “read” this story? Our scientific understanding…


Earth to exoplanet: Hunting for planets with machine learning

For thousands of years, people have looked up at the stars, recorded observations, and noticed patterns. Some of the first objects early astronomers identified were planets, which the Greeks called “planētai,” or “wanderers,” for their seemingly irregular movement through the night sky. Centuries of study helped people understand that the Earth and other planets in our solar system orbit the sun—a star like many others.

Today, with the help of technologies like telescope optics, space flight, digital cameras, and computers, it’s possible for us to extend our understanding beyond our own sun and detect planets around other stars. Studying these planets—called exoplanets—helps us explore some of our deepest human inquiries about the universe. What else is out there? Are there other planets and solar systems like our own?

Though technology has aided the hunt, finding exoplanets isn’t easy. Compared to their host stars, exoplanets are cold, small and dark—about as tricky to spot as a firefly flying next to a searchlight … from thousands of miles away. But with the help of machine learning, we’ve recently made some progress.

One of the main ways astrophysicists search for exoplanets is by analyzing large amounts of data from NASA’s Kepler mission with both automated software and manual analysis. Kepler observed about 200,000 stars for four years, taking a picture every 30 minutes, creating about 14 billion data points. Those 14 billion data points translate to about 2 quadrillion possible planet orbits! It’s a huge amount of information for even the most powerful computers to analyze, creating a laborious, time-intensive process. To make this process faster and more effective, we turned to machine learning.
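
Those figures hang together with a quick back-of-the-envelope check (numbers rounded, as in the post):

    # Rough check of the Kepler data volume quoted above (rounded figures).
    stars = 200_000
    years = 4
    samples_per_day = 24 * 60 / 30                    # one brightness measurement every 30 minutes
    samples_per_star = years * 365.25 * samples_per_day

    total_points = stars * samples_per_star
    print(f"{total_points:,.0f}")                     # ~14,000,000,000, i.e. about 14 billion data points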

The measured brightness of a star decreases ever so slightly when an orbiting planet blocks some of the light. The Kepler space telescope observed the brightness of 200,000 stars for 4 years to hunt for these characteristic signals caused by transiting planets.

Machine learning is a way of teaching computers to recognize patterns, and it’s particularly useful in making sense of large amounts of data. The key idea is to let a computer learn by example instead of programming it with specific rules.

I’m a Google AI researcher with an interest in space, and started this work as a 20 percent project (an opportunity at Google to work on something that interests you for 20 percent of your time). In the process, I reached out to Andrew, an astrophysicist from UT Austin, to collaborate. Together, we took this technique to the skies and taught a machine learning system how to identify planets around faraway stars.

Using a dataset of more than 15,000 labeled Kepler signals, we created a TensorFlow model to distinguish planets from non-planets. To do this, it had to recognize patterns caused by actual planets, versus patterns caused by other objects like starspots and binary stars. When we tested our model on signals it had never seen before, it correctly identified which signals were planets and which signals were not planets 96 percent of the time. So we knew it worked!
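
The paragraph above is essentially a binary classification setup. As a rough illustration only (not the published model architecture, and with randomly generated stand-in data), a TensorFlow version might look like this:

    import numpy as np
    import tensorflow as tf

    # Stand-in data: each example is a phase-folded light curve (brightness over
    # time), labeled 1 for "planet" and 0 for a false positive such as a starspot
    # or eclipsing binary. Shapes and labels here are random placeholders.
    light_curves = np.random.rand(15000, 2001, 1).astype("float32")
    labels = np.random.randint(0, 2, size=(15000,))

    model = tf.keras.Sequential([
        tf.keras.layers.Conv1D(16, 5, activation="relu", input_shape=(2001, 1)),
        tf.keras.layers.MaxPooling1D(5),
        tf.keras.layers.Conv1D(32, 5, activation="relu"),
        tf.keras.layers.GlobalMaxPooling1D(),
        tf.keras.layers.Dense(64, activation="relu"),
        tf.keras.layers.Dense(1, activation="sigmoid"),   # probability the signal is a planet
    ])

    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
    model.fit(light_curves, labels, epochs=2, validation_split=0.1)

The 96 percent figure quoted above corresponds to evaluating a trained model of this kind on labeled signals held out from training.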


Armed with our working model, we shot for the stars, using it to hunt for new planets in Kepler data. To narrow the search, we looked at the 670 stars that were already known to host two or more exoplanets. In doing so, we discovered two new planets: Kepler 80g and Kepler 90i. Significantly, Kepler 90i is the eighth planet discovered orbiting the Kepler 90 star, making it the first known 8-planet system outside of our own.

We used 15,000 labeled Kepler signals to train our machine learning model to identify planet signals. We used this model to hunt for new planets in data from 670 stars, and discovered two planets missed in previous searches.

Some fun facts about our newly discovered planet: it’s 30 percent larger than Earth, and its surface temperature is approximately 800°F—not ideal for your next vacation. It also orbits its star every 14 days, meaning you’d have a birthday there just about every two weeks.

Kepler 90 is the first known 8-planet system outside of our own. In this system, planets orbit closer to their star, and Kepler 90i orbits once every 14 days. (Note that planet sizes and distances from stars are not to scale.)

The sky is the limit (so to speak) when it comes to the possibilities of this technology. So far, we’ve only used our model to search 670 stars out of 200,000. There may be many exoplanets still unfound in Kepler data, and new ideas and techniques like machine learning will help fuel celestial discoveries for many years to come. To infinity, and beyond!

Opening the Google AI China Center

Since becoming a professor 12 years ago and joining Google a year ago, I’ve had the good fortune to work with many talented Chinese engineers, researchers and technologists. China is home to many of the world’s top experts in artificial intelligence (AI) and machine learning. All three winning teams of the ImageNet Challenge in the past three years have been largely composed of Chinese researchers. Chinese authors contributed 43 percent of all content in the top 100 AI journals in 2015—and when the Association for the Advancement of AI discovered that their annual meeting overlapped with Chinese New Year this year, they rescheduled.

I believe AI and its benefits have no borders. Whether a breakthrough occurs in Silicon Valley, Beijing or anywhere else, it has the potential to make life better for everyone around the world. As an AI-first company, we see this as an important part of our collective mission. And we want to work with the best AI talent, wherever that talent is, to achieve it.

That’s why I am excited to launch the Google AI China Center, our first such center in Asia, at our Google Developer Days event in Shanghai today. This Center joins other AI research groups we have all over the world, including in New York, Toronto, London and Zurich, all contributing towards the same goal of finding ways to make AI work better for everyone.

Focused on basic AI research, the Center will consist of a team of AI researchers in Beijing, supported by Google China’s strong engineering teams. We’ve already hired some top experts, and will be working to build the team in the months ahead (check our jobs site for open roles!). Along with Dr. Jia Li, Head of Research and Development at Google Cloud AI, I’ll be leading and coordinating the research. Besides publishing its own work, the Google AI China Center will also support the AI research community by funding and sponsoring AI conferences and workshops, and working closely with the vibrant Chinese AI research community.

Humanity is going through a huge transformation thanks to the phenomenal growth of computing and digitization. In just a few years, automatic image classification in photo apps has become a standard feature. And we’re seeing rapid adoption of natural language as an interface with voice assistants like Google Home. At Cloud, we see our enterprise partners using AI to transform their businesses in fascinating ways at an astounding pace. As technology starts to shape human life in more profound ways, we will need to work together to ensure that the AI of tomorrow benefits all of us. 

The Google AI China Center is a small contribution to this goal. We look forward to working with the brightest AI researchers in China to help find solutions to the world’s problems. 

Once again: the science of AI has no borders, and neither do its benefits.

A look at one billion drawings from around the world

Since November 2016, people all around the world have drawn one billion doodles in Quick, Draw!, a web game where a neural network tries to recognize your drawings.

That includes 2.9 million cats, 2.9 million hot dogs, and 2.9 million drawings of snowflakes.


Each drawing is unique. But when you step back and look at one billion of them, the differences fade away. Turns out, one billion drawings can remind us of how similar we are.

Take drawings people made of faces. Some have eyebrows.


Some have ears.


Some have hair.


Some are round.


Some are oval.


But if you look at them all together and squint, you notice something interesting: Most people seem to draw faces that are smiling.


These sorts of interesting patterns emerge with lots of drawings. Like how people all over the world have trouble drawing bicycles.


With some exceptions from the rare bicycle-drawing experts.


If you overlay these drawings, you’ll also notice some interesting patterns based on geography. Like the directions that chairs might point:

Or the number of scoops you might get on an ice cream cone.


(Source: Kyle McDonald)

And the strategy you might use to draw a star.


Still, no matter the drawing method, over the last 12 months, people have drawn more stars in Quick, Draw! than there are actual stars visible to the naked eye in the night sky.


If there’s one thing one billion drawings has taught us, it’s that no matter who we are or where we’re from, we’re united by the fun of making silly drawings of the things around us.

Quick, Draw! began as a simple way to let anyone play with machine learning. But these billions of drawings are also a valuable resource for improving machine learning. Researchers at Google have used them to train models like sketch-rnn, which lets people draw with a neural network. And the data we gathered from the game powers tools like AutoDraw, which pairs machine learning with drawings from talented artists to help everyone create anything visual, fast.

There is so much we have yet to discover. To explore a subset of the billion drawings, visit our open dataset. To learn more about how Quick, Draw! was built, read this post. And to draw your own star (or ice cream cone, or bicycle), play a round of Quick, Draw!

Pivot to the cloud: intelligent features in Google Sheets help businesses uncover insights

When it comes to data in spreadsheets, deciphering meaningful insights can be a challenge whether you’re a spreadsheet guru or data analytics pro. But thanks to advances in the cloud and artificial intelligence, you can instantly uncover insights and empower everyone in your organization—not just those with technical or analytics backgrounds—to make more informed decisions.

We launched “Explore” in Sheets to help you decipher your data easily using the power of machine intelligence, and since then we’ve added even more ways for you to intelligently visualize and share your company data. Today, we’re announcing additional features to Google Sheets to help businesses make better use of their data, from pivot tables and formula suggestions powered by machine intelligence, to even more flexible ways to help you analyze your data.

Easier pivot tables, faster insights

Many teams rely on pivot tables to summarize massive data sets and find useful patterns, but creating them manually can be tricky. Now, if you have data organized in a spreadsheet, Sheets can intelligently suggest a pivot table for you.

In the Explore panel, you can also ask questions of your data using everyday language (via natural language processing) and have the answer returned as a pivot table. For example, type “what is the sum of revenue by salesperson?” or “how much revenue does each product category generate?” and Sheets can help you find the right pivot table analysis.
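
For readers who think in code, those natural-language questions map onto ordinary pivot-table aggregations. Here is the same idea expressed with pandas rather than Sheets; the column names and numbers below are made up:

    import pandas as pd

    # Made-up sales data standing in for a spreadsheet range.
    df = pd.DataFrame({
        "Salesperson": ["Ana", "Ben", "Ana", "Cam", "Ben"],
        "Product category": ["Laptops", "Phones", "Phones", "Laptops", "Laptops"],
        "Revenue": [1200, 800, 950, 400, 1100],
    })

    # "What is the sum of revenue by salesperson?"
    by_salesperson = pd.pivot_table(df, values="Revenue", index="Salesperson", aggfunc="sum")

    # "How much revenue does each product category generate?"
    by_category = pd.pivot_table(df, values="Revenue", index="Product category", aggfunc="sum")

    print(by_salesperson)
    print(by_category)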


In addition, if you want to create a pivot table from scratch, Sheets can suggest a number of relevant tables in the pivot table editor to help you summarize your data faster.

Suggested formulas, quicker answers

We often use basic spreadsheet formulas like =SUM or =AVERAGE for data analysis, but it takes time to make sure all inputs are written correctly. Soon, you may notice suggestions pop up when you type “=” in a cell. Using machine intelligence, Sheets provides full formula suggestions to you based on contextual clues from your spreadsheet data. We designed this to help teams save time and get answers more intuitively.

Formula suggestions in Sheets

Even more Sheets features

We’re also adding more features to make Sheets even better for data analysis:

  • Check out a refreshed UI for pivot tables in Sheets, and new, customizable headings for rows and columns.
  • View your data differently with new pivot table features. When you create a pivot table, you can “show values as a % of totals” to see summarized values as a fraction of grand totals. Once you have a table, you can right-click on a cell to “view details” or even combine pivot table groups to aggregate data the way you need it. We’re also adding new format options, like repeated row labels, to give you more fine-tuned control of how to present your summarized data.
  • Create and edit waterfall charts. Waterfall charts are good for visualizing sequential changes in data, like if you want to see the incremental breakdown of last year’s revenue month-by-month. Select Insert > Chart > Chart type picker and then choose “waterfall.”
  • Quickly import or paste fixed-width formatted data files. Sheets will automatically split the data into columns for you without needing a delimiter, such as commas, between fields (see the quick illustration below).
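
To illustrate that last point: “fixed-width” means each field occupies a fixed number of characters instead of being separated by a delimiter. A small pandas sketch of the same kind of import (the file contents are invented):

    import io
    import pandas as pd

    # A fixed-width "file": columns are defined by character position, not commas.
    raw = (
        "name      region  revenue\n"
        "Ana       West       1200\n"
        "Ben       East        800\n"
    )

    # read_fwf infers the column boundaries and splits the data into columns,
    # much like the Sheets import described above.
    df = pd.read_fwf(io.StringIO(raw))
    print(df)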

These new Sheets features will roll out in the coming weeks—see specific details here. To learn more about how G Suite can help your business uncover valuable insights and speed up efficiencies, visit the G Suite website. Or check out these tips to help you get started with Sheets.

Get ready for AI to help make your business more productive

Editor’s note: Companies are evaluating how to use artificial intelligence to transform how they work. Nicholas McQuire, analyst at CCS Insight, reflects on how businesses are using machine learning and assistive technologies to help employees be more productive. He also provides tangible takeaways on how enterprises can better prepare for the future of work.

Employees are drowning in a sea of data and sprawling digital tools, using an average of 6.1 mobile apps for work purposes today, according to a recent CCS Insight survey of IT decision-makers. Part of the reason we’ve seen a lag in macro productivity since the 2008 financial crisis is that we waste a lot of time doing mundane tasks, like searching for data, booking meetings and learning the ins and outs of complex software.

According to Harvard Business Review, wasted time and inefficient processes—what experts call “organizational drag”—cost the U.S. economy a staggering $3 trillion each year. Employees need more assistive and personalized technology to help them tackle organizational drag and work faster and smarter.

Over the next five years, artificial intelligence (AI) will change the way we work and, in the process, transform businesses.

The arrival of AI in the enterprise is quickening

I witnessed a number of proofs of concept in machine learning in 2017; many speech- and image-based cognitive applications are emerging in specific markets, like fraud detection in finance, low-level contract analysis in the legal sector and personalization in retail. There are also AI applications emerging in corporate functions such as IT support, human resources, sales and customer service.

This shows promise for the technology, particularly in the face of challenges like trust, complexity, security and training required for machine learning systems. But it also suggests that the arrival of AI in enterprises could be moving more quickly than we think.

According to the same study, 58 percent of respondents said they are either using, trialling or researching the technology in their business. Decision-makers also said that on average, 29 percent of their applications will be enhanced with AI within the next two years—a remarkably bullish view.

New opportunities for businesses to evolve productivity

In this context, new AI capabilities pose exciting opportunities to evolve productivity and collaboration.

  • Assistive software: In the past year, assistive, cognitive features have become more prevalent in productivity software, such as search, quicker access to documents, automated email replies and virtual assistants. These solutions help surface contextually relevant information for employees and can automate simple, time-consuming tasks, like scheduling meetings, creating help desk tickets, booking conference rooms or summarizing content. In the future, they might also help firms improve and manage employee engagement, a critical human resources and leadership challenge at the moment.
  • Natural language processing: It won’t be long before we also see the integration of voice or natural language processing in productivity apps. The rise of speech-controlled smart speakers such as Google Home, Amazon Echo or the recently launched Alexa for Business shows that creating and completing documents using speech dictation, or using natural language queries to parse data or control functions in spreadsheets, is no longer in the realm of science fiction.
  • Security: Perhaps one of the biggest uses of AI will be to protect company information. Companies are beginning to use AI to protect against spam, phishing and malware in email, as well as the alarming rise of data breaches across the globe; the use of AI to detect threats and improve incident response will likely rise exponentially. Cloud security vendors with access to higher volumes of signals to train AI models are well placed to help businesses leverage early detection of threats. Perhaps this is why IT professionals listed cybersecurity as the use of AI most likely to be adopted in their organizations.

One thing to note: it’s important that enterprises gradually introduce their employees to machine learning capabilities in productivity apps so as not to undermine the familiarity of the user experience or put employees off out of fear of privacy violations. In this respect, the advent of AI in work activities resembles consumer apps like YouTube, Maps, Spotify or Amazon, where the technology is subtle enough that users may not be aware of its cognitive features. The fact that 54 percent of employees in our survey stated they don’t use AI in their personal life, despite the widespread use of AI in these successful apps, is an important illustration.

How your company can prepare for change

Businesses of all shapes and sizes need to prepare for one of the most important technology shifts of our generation. For those who have yet to get started, here are a few things to consider:

  1. Introduce your employees to AI in collaboration tools early. New, assistive AI features in collaboration software help employees get familiar with the technology and its benefits. Smart email, improved document access and search, chatbots and speech assistants will all be important and accessible technologies that can save employees time, improve workflows and enhance employee experiences.
  2. Take advantage of tools that use AI for data security. Rising data breaches and insider threats, coupled with the growing use of cloud and mobile applications, means the integrity of company data is consistently at risk. Security products that incorporate machine learning-based threat intelligence and anomaly detection should be a key priority.
  3. Don’t neglect change management. New collaboration tools that use AI have a high impact on organizational culture, but not all employees will be immediately supportive of this new way of working. While our surveys reveal employees are generally positive on AI, there is still much fear and confusion surrounding AI as a source of job displacement. Be mindful of the impact of change management, specifically the importance of good communication, training and, above all, employee engagement throughout the process.

AI will no doubt face some challenges over the next few years as it enters the workplace, but sentiment is changing away from doom-and-gloom scenarios towards understanding how the technology can be used more effectively to assist humans and enable smarter work. 

It will be fascinating to see how businesses and technology markets transform as AI matures in the coming years.

An AI Resident at work: Suhani Vora and her work on genomics

Suhani Vora is a bioengineer, aspiring (and self-taught) machine learning expert, SNES Super Mario World ninja, and Google AI Resident. This means that she’s part of a 12-month research training program designed to jumpstart a career in machine learning. Residents, who are paired with Google AI mentors to work on research projects according to their interests, apply machine learning to their expertise in various backgrounds—from computer science to epidemiology.

I caught up with Suhani to hear more about her work as an AI Resident, her typical day, and how AI can help transform the field of genomics.

Phing: How did you get into machine learning research?

Suhani: During graduate school, I worked on engineering CRISPR/Cas9 systems, which enable a wide range of research on genomes. And though I was working with the most efficient tools available for genome editing, I knew we could make progress even faster.

One important factor was our limited ability to predict what novel biological designs would work. Each design cycle, we were only using very small amounts of previously collected data and relied on individual interpretation of that data to make design decisions in the lab.

Our failure to incorporate more powerful computational methods to make use of big data and aid in the design process was limiting our ability to make progress quickly. Knowing that machine learning methods would greatly accelerate the speed of scientific discovery, I decided to work on finding ways to apply machine learning to my own field of genetic engineering.

I reached out to researchers in the field, asking how best to get started. A Googler I knew suggested I take the machine learning course by Andrew Ng on Coursera (could not recommend it more highly), so I did that. I’ve never had more fun learning! I had also started auditing an ML course at MIT, and reading papers on deep learning applications to problems in genomics. Ultimately, I took the plunge and ended up joining the Residency program after finishing grad school.

Tell us about your role at Google, and what you’re working on right now.

I’m a cross-disciplinary deep learning researcher—I research, code, and experiment with deep learning models to explore their applicability to problems in genomics.

In the same way that we use machine learning models to predict which objects are present in an image (think: searching for your dogs in Google Photos), I research ways we can build neural networks to automatically predict the properties of a DNA sequence. This has all kinds of applications, like predicting whether a DNA mutation will cause cancer or is benign.
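
To give a flavor of what “modeling a DNA sequence” means in practice: before a neural network can read a sequence, each base is typically one-hot encoded. The snippet below is a generic illustration, not Suhani’s data or models:

    import numpy as np

    # One-hot encode a DNA sequence so a neural network can consume it:
    # each base (A, C, G, T) becomes a 4-dimensional vector.
    BASES = {"A": 0, "C": 1, "G": 2, "T": 3}

    def one_hot(sequence: str) -> np.ndarray:
        encoded = np.zeros((len(sequence), 4), dtype=np.float32)
        for i, base in enumerate(sequence.upper()):
            encoded[i, BASES[base]] = 1.0
        return encoded

    x = one_hot("ACGTGGTCA")
    print(x.shape)   # (9, 4): ready to feed into, say, a 1D convolutional classifier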

What’s a typical day like for you?

On any given day, I’m writing code to process new genomics data, or creating a neural network in TensorFlow to model the data. Right now, a lot of my time is spent troubleshooting such models.

I also spend time chatting with fellow Residents, or a member of the TensorFlow team, to get their expertise on the experiments or code I’m writing. This could include a meeting with my two mentors, Mark DePristo and Quoc Le, top researchers in the field of machine learning who regularly provide invaluable guidance for developing the neural network models I’m interested in.

  • Suhani heads to the whiteboard.
  • Just a normal day writing code to process new genomics data, or creating (and troubleshooting…) a neural network in TensorFlow to model the data.
  • Selfie with her mentors, Mark DePristo and Quoc Le.
  • Eating lunch at her favorite Google cafe.

What do you like most about the AI Residency program? About working at Google?

I like the freedom to pursue topics of our interest, combined with the strong support network we have to get things done. Google is a really positive work environment, and I feel set up to succeed. In a different environment I wouldn’t have the chance to work with a world-class researcher in computational genomics like Mark, AND Quoc, one of the world’s leading machine learning researchers, at the same time and in the same place. It’s pretty mind-blowing.

What kind of background do you need to work in machine learning?

We have such a wide array of backgrounds among our AI Residents! The only real common thread I see is a very strong desire to work on machine learning, or to apply machine learning to a particular problem of choice. I think having a strong background in linear algebra, statistics, computer science, and perhaps modeling makes things easier—but these skills are also now accessible to almost anyone with an interest, through MOOCs!

What kinds of problems do you think that AI can help solve for the world?

Ultimately, it really just depends how creative we are in figuring out what AI can do for us. Current deep learning methods have become state of the art for image recognition tasks, such as automatically detecting pets or scenes in images, and natural language processing, like translating from Chinese to English. I’m excited to see the next wave of applications in areas such as speech recognition, robotic handling, and medicine.

Interested in the AI Residency? Check out submission details and apply for the 2018 program on our Careers site.

Quill.org: better writing with machine learning

Editor’s note: TensorFlow, our open source machine learning library, is just that—open to anyone. Companies, nonprofits, researchers and developers have used TensorFlow in some pretty cool ways, and we’re sharing those stories here on Keyword. Here’s one of them.

Quill.org was founded by a group of educators and technologists to help students become better writers and critical thinkers. Before beginning development, they researched hundreds of studies on writing education and found a common theme—students had a hard time grasping the difference between a run-on sentence and a fragment. So the Quill team developed a tool to help students identify the different parts of a sentence, with a focus on real-time feedback.

Using the Quill tool, students complete a variety of exercises, including joining sentences, writing complex sentences, and explaining their use and understanding of grammar. The tool relies on a huge repository of sentence fragments, which Quill finds, recognizes and compiles using TensorFlow, Google’s open source machine learning library. TensorFlow technology is the backbone of the tool and can accurately detect whether a student’s answers are correct. After completing the exercises, each student gets a customized explanation of incorrect responses, and the tool learns from each answer to create an individualized testing plan focused on areas of difficulty. Here’s an example of how it works:
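
Quill’s own embedded demo isn’t reproduced here, but a toy version of the underlying idea (classifying a piece of text as a fragment or a complete sentence with a small TensorFlow text model) might look like the sketch below. The tiny dataset, vocabulary size and architecture are invented for illustration and aren’t Quill’s:

    import tensorflow as tf

    # Invented examples: 1 = complete sentence, 0 = fragment.
    texts = [
        "The dog ran across the yard.",
        "Because the dog ran.",
        "She finished her homework before dinner.",
        "Running through the park on a sunny day.",
    ]
    labels = [1, 0, 1, 0]

    vectorize = tf.keras.layers.TextVectorization(max_tokens=1000, output_sequence_length=12)
    vectorize.adapt(texts)

    model = tf.keras.Sequential([
        vectorize,
        tf.keras.layers.Embedding(1000, 16),
        tf.keras.layers.GlobalAveragePooling1D(),
        tf.keras.layers.Dense(1, activation="sigmoid"),   # score toward "complete sentence"
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
    model.fit(tf.constant(texts), tf.constant(labels), epochs=10, verbose=0)

    # A real system would be trained on a far larger labeled corpus before its
    # scores meant anything; this toy model only shows the mechanics.
    print(model.predict(tf.constant(["Although it was raining."])))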

More than 200,000 students—62 percent from low-income schools—have used Quill. They’ve collectively answered 20 million exercises, and Quill’s quick, personalized writing instruction has helped them master writing standards across the Common Core curriculum.

Teachers have also benefitted from introducing Quill in their classrooms. Each teacher has access to a customized portal, allowing them to see an individual student’s progress. Plus, by using machine learning, teachers have been spared hundreds of hours of manual grading. Laura, a teacher at Caswell Elementary School in California said, “Quill has been a wonderful tool for my third graders, many of whom are second language learners. We especially love the immediate feedback provided after each practice; it has definitely made us pay closer attention to detail.”

Quill’s most recent update is a “multiplayer” feature, allowing students to interact with each other in the tool. They can see their peers’ responses, which fosters spirited classroom discussions and collaboration, and helps students learn from each other.

While students aren’t using quills (or even pens!) anymore, strong writing skills are as important as ever. And with the help of machine learning, Quill makes it fun and engaging to develop those skills.

Fighting phishing with smarter protections

Editor’s note: October is Cybersecurity Awareness Month, and we’re celebrating with a series of security announcements this week. This is the third post; read the first and second ones.

Online security is top of mind for everyone these days, and we’re more focused than ever on protecting you and your data on Google, in the cloud, on your devices, and across the web.

One of our biggest focuses is phishing, attacks that trick people into revealing personal information like their usernames and passwords. You may remember phishing scams as spammy emails from “princes” asking for money via wire-transfer. But things have changed a lot since then. Today’s attacks are often very targeted—this is called “spear-phishing”—more sophisticated, and may even seem to be from someone you know.

Even for savvy users, today’s phishing attacks can be hard to spot. That’s why we’ve invested in automated security systems that can analyze an internet’s worth of phishing attacks, detect subtle clues to uncover them, and help us protect our users in Gmail, as well as in other Google products, and across the web.

Our investments have enabled us to significantly decrease the volume of phishing emails that users and customers ever see. With our automated protections, account security (like security keys) and warnings, Gmail is the most secure email service today.

Here is a look at some of the systems that have helped us secure users over time, and enabled us to add brand new protections in the last year.

More data helps protect your data

The best protections against large-scale phishing operations are even larger-scale defenses. Safe Browsing and Gmail spam filters are effective because they have such broad visibility across the web. By automatically scanning billions of emails, webpages, and apps for threats, they enable us to see the clearest, most up-to-date picture of the phishing landscape.

We’ve trained our security systems to block known issues for years. But, new, sophisticated phishing emails may come from people’s actual contacts (yes, attackers are able to do this), or include familiar company logos or sign-in pages. Here’s one example:


Attacks like this can be really difficult for people to spot. But new insights from our automated defenses have enabled us to immediately detect, thwart and protect Gmail users from subtler threats like these as well.

Smarter protections for Gmail users, and beyond

Since the beginning of the year, we’ve added brand new protections that have reduced the volume of spam in people’s inboxes even further.

  • We now show a warning within Gmail’s Android and iOS apps if a user clicks a link to a phishing site that’s been flagged by Safe Browsing. These supplement the warnings we’ve shown on the web since last year.
  • We’ve built new systems that detect suspicious email attachments and submit them for further inspection by Safe Browsing. This protects all Gmail users, including G Suite customers, from malware that may be hidden in attachments.
  • We’ve also updated our machine learning models to specifically identify pages that look like common log-in pages and messages that contain spear-phishing signals.
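
For a sense of what “spear-phishing signals” can mean in practice, here is a purely illustrative sketch of the kind of features a classifier might be fed; the specific signals, names and example values are invented and say nothing about how Gmail’s models actually work:

    import re

    SUSPICIOUS_PHRASES = ("verify your account", "password expires", "urgent", "sign in now")

    def spear_phishing_signals(sender: str, reply_to: str, subject: str, body: str) -> dict:
        """Compute a few toy features of the sort a phishing classifier might use."""
        hosts = re.findall(r"https?://([^/\s]+)", body)
        return {
            "reply_to_domain_mismatch": reply_to.split("@")[-1] != sender.split("@")[-1],
            "urgent_language": any(p in (subject + " " + body).lower() for p in SUSPICIOUS_PHRASES),
            "link_to_raw_ip": any(re.fullmatch(r"[\d.]+", host) for host in hosts),
        }

    print(spear_phishing_signals(
        sender="it-support@example.com",
        reply_to="help@examp1e-security.net",
        subject="Urgent: verify your account",
        body="Sign in now at http://192.0.2.10/login to keep access.",
    ))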

Safe Browsing helps protect more than 3 billion devices from phishing, across Google and beyond. It hunts and flags malicious extensions in the Chrome Web Store, helps block malicious ads, helps power Google Play Protect, and more. And of course, Safe Browsing continues to show millions of red warnings about websites it considers dangerous or insecure in multiple browsers—Chrome, Firefox, Safari—and across many different platforms, including iOS and Android.


Layers of phishing protection

Phishing is a complex problem, and there isn’t a single, silver-bullet solution. That’s why we’ve provided additional protections for users for many years.

  • Since 2012, we’ve warned our users if their accounts are being targeted by government-backed attackers. We send thousands of these warnings each year, and we’ve continued to improve them so they are helpful to people. The warnings look like this.
  • This summer, we began to warn people before they linked their Google account to an unverified third-party app.
  • We first offered two-step verification in 2011, and later strengthened it in 2014 with Security Key, the most secure version of this type of protection. These features add extra protection to your account because attackers need more than just your username and password to sign in.

We’ll never stop working to keep your account secure with industry-leading protections. More are coming soon, so stay tuned.

Pixel Visual Core: image processing and machine learning on Pixel 2

The camera on the new Pixel 2 is packed full of great hardware, software and machine learning (ML), so all you need to do is point and shoot to take amazing photos and videos. One of the technologies that helps you take great photos is HDR+, which makes it possible to get excellent photos of scenes with a large range of brightness levels, from dimly lit landscapes to a very sunny sky.

HDR+ produces beautiful images, and we’ve evolved the algorithm that powers it over the past year to use the Pixel 2’s application processor efficiently, and enable you to take multiple pictures in sequence by intelligently processing HDR+ in the background. In parallel, we’ve also been working on creating hardware capabilities that enable significantly greater computing power—beyond existing hardware—to bring HDR+ to third-party photography applications.

To expand the reach of HDR+, handle the most challenging imaging and ML applications, and deliver lower-latency and even more power-efficient HDR+ processing, we’ve created Pixel Visual Core.

Pixel Visual Core is Google’s first custom-designed co-processor for consumer products. It’s built into every Pixel 2, and in the coming months, we’ll turn it on through a software update to enable more applications to use Pixel 2’s camera for taking HDR+ quality pictures.

Magnified image of Pixel Visual Core

Let’s delve into the details for you technical folks out there: The centerpiece of Pixel Visual Core is the Google-designed Image Processing Unit (IPU)—a fully programmable, domain-specific processor designed from scratch to deliver maximum performance at low power. With eight Google-designed custom cores, each with 512 arithmetic logic units (ALUs), the IPU delivers raw performance of more than 3 trillion operations per second on a mobile power budget. Using Pixel Visual Core, HDR+ can run 5x faster, using less than one-tenth the energy it takes on the application processor (AP). A key ingredient to the IPU’s efficiency is the tight coupling of hardware and software—our software controls many more details of the hardware than in a typical processor. Handing more control to the software makes the hardware simpler and more efficient, but it also makes the IPU challenging to program using traditional programming languages. To address this, the IPU leverages domain-specific languages that ease the burden on both developers and the compiler: Halide for image processing and TensorFlow for machine learning. A custom Google-made compiler optimizes the code for the underlying hardware.
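
A quick back-of-the-envelope reading of those numbers (the clock rate below is purely an assumption for illustration; it isn’t stated in this post):

    # Rough sanity check of the raw-performance figure quoted above.
    cores = 8
    alus_per_core = 512
    total_alus = cores * alus_per_core            # 4,096 ALUs across the IPU

    assumed_clock_hz = 800e6                      # assumption: a plausible mobile clock, not a published spec
    ops_per_second = total_alus * assumed_clock_hz
    print(f"{ops_per_second:.2e}")                # ~3.3e12, in line with "more than 3 trillion ops per second"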

In the coming weeks, we’ll enable Pixel Visual Core as a developer option in the developer preview of Android Oreo 8.1 (MR1). Later, we’ll enable it for all third-party apps using the Android Camera API, giving them access to the Pixel 2’s HDR+ technology. We can’t wait to see the beautiful HDR+ photography that you already get through your Pixel 2 camera become available in your favorite photography apps.

  • Side-by-side pictures taken on Pixel 2 in a third-party app. In each pair, the picture on the right is HDR+ on Pixel Visual Core.

HDR+ will be the first application to run on Pixel Visual Core. Notably, because Pixel Visual Core is programmable, we’re already preparing the next set of applications. The great thing is that as we port more machine learning and imaging applications to use Pixel Visual Core, Pixel 2 will continuously improve. So keep an eye out!

Three Ways in which Stealthwatch Helps You Get More from Your Network Data

Do you know what the greatest Olympian of all time and Stealthwatch have in common? Both work harder and smarter for unbeatable performance. I recently heard from the one-and-only, Michael Phelps. He said that very early on, he and his coach set very high goals. And he knew that to achieve them, he had to […]

The best hardware, software and AI—together

Today, we introduced our second generation family of consumer hardware products, all made by Google: new Pixel phones, Google Home Mini and Max, an all new Pixelbook, Google Clips hands-free camera, Google Pixel Buds, and an updated Daydream View headset. We see tremendous potential for devices to be helpful, make your life easier, and even get better over time when they’re created at the intersection of hardware, software and advanced artificial intelligence (AI).

Why Google?

These days many devices—especially smartphones—look and act the same. That means in order to create a meaningful experience for users, we need a different approach. A year ago, Sundar outlined his vision of how AI would change how people would use computers. And in fact, AI is already transforming what Google’s products can do in the real world. For example, swipe typing has been around for a while, but AI lets people use Gboard to swipe-type in two languages at once. Google Maps uses AI to figure out what the parking is like at your destination and suggest alternative spots before you’ve even put your foot on the gas. But, for this wave of computing to reach new breakthroughs, we have to build software and hardware that can bring more of the potential of AI into reality—which is what we’ve set out to do with this year’s new family of products.

Hardware, built from the inside out

We’ve designed and built our latest hardware products around a few core tenets. First and foremost, we want them to be radically helpful. They’re fast, they’re there when you need them, and they’re simple to use. Second, everything is designed for you, so that the technology doesn’t get in the way and instead blends into your lifestyle. Lastly, by creating hardware with AI at the core, our products can improve over time. They’re constantly getting better and faster through automatic software updates. And they’re designed to learn from you, so you’ll notice features—like the Google Assistant—get smarter and more assistive the more you interact with them.

You’ll see this reflected in our 2017 lineup of new Made by Google products:

  • The Pixel 2 has the best camera of any smartphone, again, along with a gorgeous display and augmented reality capabilities. Pixel owners get unlimited storage for their photos and videos, and an exclusive preview of Google Lens, which uses AI to give you helpful information about the things around you.
  • Google Home Mini brings the Assistant to more places throughout your home, with a beautiful design that fits anywhere. And Max is our biggest and best-sounding Google Home device, powered by the Assistant. And with AI-based Smart Sound, Max has the ability to adapt your audio experience to you—your environment, context, and preferences.
  • With Pixelbook, we’ve reimagined the laptop as a high-performance Chromebook, with a versatile form factor that works the way you do. It’s the first laptop with the Assistant built in, and the Pixelbook Pen makes the whole experience even smarter.
  • Our new Pixel Buds combine Google smarts and the best digital sound. You’ll get elegant touch controls that put the Assistant just a tap away, and they’ll even help you communicate in a different language.
  • The updated Daydream View is the best mobile virtual reality (VR) headset on the market, and the simplest, most comfortable VR experience.
  • Google Clips is a totally new way to capture genuine, spontaneous moments—all powered by machine learning and AI. This tiny camera seamlessly sends clips to your phone, and even edits and curates them for you.

Assistant, everywhere

Across all these devices, you can interact with the Google Assistant any way you want—talk to it with your Google Home or your Pixel Buds, squeeze your Pixel 2, or use your Pixelbook’s Assistant key or circle things on your screen with the Pixelbook Pen. Wherever you are, and on any device with the Assistant, you can connect to the information you need and get help with the tasks that get you through your day. No other assistive technology comes close, and it continues to get better every day.

Google’s hardware business is just getting started, and we’re committed to building and investing for the long run. We couldn’t be more excited to introduce you to our second-generation family of products, which truly brings together the best of Google software and thoughtfully designed hardware with cutting-edge AI. We hope you enjoy using them as much as we do.

Best commute ever? Ride along with Google execs Diane Greene and Fei-Fei Li

Editor’s Note: The Grace Hopper Celebration of Women in Computing is coming up, and Diane Greene and Dr. Fei-Fei Li—two of our senior leaders—are getting ready. Sometimes Diane and Fei-Fei commute to the office together, and this time we happened to be along to capture the ride. Diane took over the music for the commute, and with Aretha Franklin’s “Respect” in the background, she and Fei-Fei chatted about the conference, their careers in tech, motherhood, and amplifying female voices everywhere. Hop in the backseat for Diane and Fei-Fei’s ride to work.

(A quick note for the riders: This conversation has been edited for brevity, and so you don’t have to read Diane and Fei-Fei talking about U-turns.)

fei-fei and diane.gif

Fei-Fei: Are you getting excited for Grace Hopper?

Diane: I’m super excited for the conference. We’re bringing together technical women to surface a lot of things that haven’t been talked about as openly in the past.

Fei-Fei: You’ve had a long career in tech. What makes this point in time different from the early days when you entered this field?

Diane: I got a degree in engineering in 1976 (ed note: Fei-Fei jumped in to remind Diane that this was the year she was born!). Computers were so exciting, and I learned to program. When I went to grad school to study computer science in 1985, there was actually a fair number of women at UC Berkeley. I’d say we had at least 30 percent women, which is way better than today.

It was a new, undefined field. And whenever there’s a new industry or technology, it’s wide open for everyone because nothing’s been established. Tech was that way, so it was quite natural for women to work in artificial intelligence and theory, and even in systems, networking, and hardware architecture. I came from mechanical engineering and the oil industry, where I was the only woman. Tech was full of women then, but now women make up less than 15 percent of the field.

Fei-Fei: So do you think it’s too late?

Diane: I don’t think it’s too late. Girls in grade school and high school are coding. And certainly in colleges the focus on engineering is really strong, and the numbers are growing again.

Fei-Fei: You’re giving a talk at Grace Hopper—how will you talk to them about what distinguishes your career?

Diane: It’s wonderful that we’re both giving talks! Growing up, I loved building things so it was natural for me to go into engineering. I want to encourage other women to start with what you’re interested in and what makes you excited. If you love building things, focus on that, and the career success will come. I’ve been so unbelievably lucky in my career, but it’s a proof point that you can end up having quite a good career while doing what you’re interested in.

I want to encourage other women to start with what you’re interested in and what makes you excited. If you love building things, focus on that, and the career success will come.

Diane Greene

Fei-Fei: And you are a mother of two grown, beautiful children. How did you prioritize them while balancing your career?

Diane: When I was at VMware, I had the “go home for dinner” rule. When we founded the company, I was pregnant and none of the other founders had kids. But we were able to build the culture around families—every time someone had a kid we gave them a VMware diaper bag. Whenever my kids had a school play or a parent-teacher conference, I would make a big show of leaving in the middle of the day so everyone would know they could do that too. And at Google, I encourage both men and women on my team to find that balance.

Fei-Fei: It’s so important for your message to get across because young women today are thinking about their goals and what they want to build for the world, but also for themselves and their families. And there are so many women and people of color doing great work, how do we lift up their work? How do we get their voices heard? This is something I think about all the time, the voice of women and underrepresented communities in AI.

Diane: This is about educating people—not just women—to surface the accomplishments of everybody and make sure there’s no unconscious bias going on. I think Grace Hopper is a phenomenal tool for this, and there are things I incorporate into my workday to prevent that unconscious bias: pausing to make sure the right people are included in a meeting and that no one has been overlooked, and encouraging everyone in that meeting to participate so that all voices are heard.

Fei-Fei: Grace Hopper could be a great platform to share best practices for how to address these issues.

…young women today are thinking about their goals and what they want to build for the world, but also for themselves and their families.

Dr. Fei-Fei Li

Diane: Every company is struggling to address diversity and there’s a school of thought that says having three or more people from one minority group makes all the difference in the world—I see it on boards. Whenever we have three or more women, the whole dynamic changes. Do you see that in your research group at all?

Fei-Fei: Yes, for a long time I was the only woman faculty member in the Stanford AI lab, but now it has attracted a lot of women who do very well because there’s a community. And that’s wonderful for me, and for the group.

Now back to you … you’ve had such a successful career, and I think a lot of women would love to know what keeps you going every day.

Diane: When you wake up in the morning, be excited about what’s ahead for the day. And if you’re not excited, ask yourself if it’s time for a change. Right now the Cloud is at the center of massive change in our world, and I’m lucky to have a front row seat to how it’s happening and what’s possible with it. We’re creating the next generation of technologies that are going to help people do things that we didn’t even know were possible, particularly in the AI/ML area. It’s exciting to be in the middle of the transformation of our world and the fast pace at which it’s happening.

Fei-Fei: Coming to Google Cloud, the most rewarding part is seeing how this is helping people go through that transformation and making a difference. And it’s at such a scale that it’s unthinkable on almost any other platform.

Diane: Cloud is making it easier for companies to work together and for people to work across boundaries together, and I love that. I’ve always found when you can collaborate across more boundaries you can get a lot more done.

To hear more from Fei-Fei and Diane, tune into Grace Hopper’s live stream on October 4. 

Access information quicker, do better work with Google Cloud Search

We all get sidetracked at work. We intend to be as efficient as possible, but inevitably, the “busyness” of business gets in the way: back-to-back meetings, unfinished docs and a rowdy inbox. To be more efficient, you need quick access to information like relevant docs, important tasks and context for your meetings.

Sadly, according to a report by McKinsey, workers spend up to 20 percent of their time—an entire day each week—searching for and consolidating information across a number of tools. We made Google Cloud Search available to Enterprise and Business edition customers earlier this year so that teams can access important information quicker. Here are a few ways that Cloud Search can help you get the information you need to accomplish more throughout your day.

1. Search more intuitively, access information quicker

If you search for a doc, you’re probably not going to remember its exact name or where you saved it in Drive. Instead, you might remember who sent the doc to you or a specific piece of information it contains, like a statistic.

A few weeks ago, we launched a new, more intuitive way to search in Cloud Search using natural language processing (NLP) technology. Type questions in Cloud Search using everyday language, like “Documents shared with me by John?,” “What’s my agenda next Tuesday?,” or “What docs need my attention?” and it will track down useful information for you.

NLP GIF

2. Prioritize your to-dos, use spare time more wisely

With so much work to do, deciding what to focus on and what to leave for later isn’t always simple. A study by McKinsey reports that only nine percent of executives surveyed feel “very satisfied” with the way they allocate their time. We think technology, like Cloud Search, should help you with more than just finding what you’re looking for—it should help you stay focused on what’s important.

Imagine if your next meeting gets cancelled and you suddenly have an extra half hour to accomplish tasks. You can open the Cloud Search app to help you focus on what’s important. Powered by machine intelligence, Cloud Search proactively surfaces information that it believes is relevant to you and organizes it into simple cards that appear in the app throughout your workday. For example, it suggests documents or tasks based on which documents need your attention or upcoming meetings you have in Google Calendar.

3. Prepare for meetings, get more out of them

Employees spend a lot of time in meetings. According to a study in the UK by the Centre for Economics and Business, office workers spend an average of four hours per week in meetings, and it’s common to join them unprepared. Respondents in the same survey felt that nearly half of the time spent in meetings (47 percent) is unproductive.

Thankfully, Cloud Search can help. It uses machine intelligence to organize and present information to set you up for success in a meeting. In addition to surfacing relevant docs, Cloud Search also surfaces information about meeting attendees from your corporate directory, and even includes links to relevant conversations from Gmail.

Start by going into Cloud Search to see info related to your next meeting. If you’re interested in looking at another meeting later in the day, just click on “Today’s meetings” and it will show you your agenda for the day. Next, select an event in your agenda (sourced from your Calendar) and Cloud Search will recommend information that’s relevant to that meeting.

GIF 2

Take back your time and focus on what’s important—open the Cloud Search app and get started today, or ask your IT administrator to enable it in your domain. You can also learn more about how Cloud Search can help your teams here.

Now anyone can explore machine learning, no coding required

From helping you find your favorite dog photos, to helping farmers in Japan sort cucumbers, machine learning is changing the way people use code to solve problems. But how does machine learning actually work? We wanted to make it easier for people who are curious about this technology to learn more about it. So we created Teachable Machine, a simple experiment that lets you teach a machine using your camera—live in the browser, no coding required.

Teachable Machine is built with a new library called deeplearn.js, which makes it easier for any web developer to get into machine learning. It trains a neural net right in your browser—locally on your device—without sending any images to a server. We’ve also open sourced the code to help inspire others to make new experiments.
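The experiment itself is built on deeplearn.js and runs entirely in the browser, but the “teach by example” idea it demonstrates can be sketched in a few lines. The following is a hypothetical illustration in Python, not the project’s actual code: it stores a few labeled feature vectors (standing in for webcam frames) and classifies new ones by majority vote among their nearest neighbors.

```python
# Hypothetical sketch of "teach by example" classification, for illustration
# only; Teachable Machine itself runs on deeplearn.js in the browser.
import numpy as np

class TinyExampleClassifier:
    def __init__(self):
        self.examples = []  # list of (feature_vector, class_label) pairs

    def add_example(self, features, label):
        """Record one user-provided example for a class."""
        self.examples.append((np.asarray(features, dtype=float), label))

    def predict(self, features, k=3):
        """Return the majority label among the k nearest stored examples."""
        features = np.asarray(features, dtype=float)
        ranked = sorted(
            (np.linalg.norm(features - stored), label)
            for stored, label in self.examples
        )
        nearest = [label for _, label in ranked[:k]]
        return max(set(nearest), key=nearest.count)
```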

Check it out at g.co/teachablemachine.

A GIPHY engineering intern goes the GIF-stance with Google Cloud Vision

Editor’s Note: Today, we’re GIFted with the presence of a guest author. Bethany Davis, current University of Pennsylvania student and former software engineering summer intern at GIPHY, shares the details of her summer project, which was powered by Google Cloud Vision. This is a condensed and modified version of a post published on the GIPHY Engineering blog.

When my friend was starting her first full-time job, I wanted to GIF her a pep talk before her first day. I had the perfect movie reference in mind: Becca from “Bridesmaids” saying, “You are more beautiful than Cinderella! You smell like pine needles and have a face like sunshine!”

I searched GIPHY for “you are more beautiful than Cinderella” to no avail, then searched for “bridesmaids” and scrolled through several dozen results before giving up.

GiphySearch_2.png
Searching for Bridesmaids or the direct quote did not yield any useful results

It was easy to search for GIFs with popular tags, but because no one had tagged this GIF with the full line from the movie, I couldn’t find it. Yet I knew this GIF was out there. I wished there was a way to find the exact GIF that was pulled from the line in a movie, scene from a TV show or lyric from a song. Luckily, I was about to start my internship at GIPHY and I had the opportunity to tackle the problem head on—by using optical character recognition (OCR) and Google Cloud Vision to help you (and me) find the perfect GIF.

GIF me the tools and I’ll finish the job

When I started my internship, GIPHY engineers had already generated metadata about our collection of GIFs using Google Cloud Vision, an image recognition tool that is powered by machine learning. Specifically, Cloud Vision had performed optical character recognition (OCR) on our entire GIF library to detect text or captions within the image. The OCR results we got back from Google Cloud Vision were so good that my team was ready to incorporate the data directly into our search engine. I was tasked with parsing the data and indexing each GIF, then updating our search query to leverage the new, bolstered metadata.
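For a sense of what that OCR step involves, here is a minimal sketch of calling Cloud Vision’s text detection from Python. The file name is a placeholder and the credentials setup is omitted; this is illustrative, not GIPHY’s production pipeline.

```python
# Minimal sketch: run Cloud Vision text detection (OCR) on a single image.
# "frame.png" is a placeholder for a frame pulled from a GIF.
from google.cloud import vision

client = vision.ImageAnnotatorClient()

with open("frame.png", "rb") as f:
    image = vision.Image(content=f.read())

response = client.text_detection(image=image)

# The first annotation is the full detected text; later ones are individual words.
for annotation in response.text_annotations:
    print(annotation.description)
```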

Using Luigi, I wrote a batch job that processed the JSON data generated by Google Cloud Vision. Then I used AWS Simple Queue Service (SQS) to coordinate the data transfer from Google Cloud Vision to documents in our search index. GIPHY search is built on top of Elasticsearch, which stores GIF documents, and the search query returns results based on the data in our Elasticsearch index. Bringing all these components together looks something like this:

GiphySearch_Workflow.png
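To make that pipeline a little more concrete, here is a hypothetical, heavily simplified version of the batch step: a Luigi task that reads Cloud Vision OCR results from a JSON file and publishes one SQS message per GIF for a downstream worker to apply to the search index. The file name, queue URL and message format are all illustrative assumptions.

```python
# Simplified, hypothetical Luigi task in the spirit of the batch job described
# above. It reads OCR results and enqueues one SQS message per GIF; a separate
# worker (not shown) would consume the queue and update Elasticsearch.
import json

import boto3
import luigi


class EnqueueOcrUpdates(luigi.Task):
    ocr_results_path = luigi.Parameter(default="cloud_vision_ocr.json")
    queue_url = luigi.Parameter(default="https://sqs.us-east-1.amazonaws.com/123456789/gif-updates")

    def output(self):
        # Marker file so Luigi knows the task has already run.
        return luigi.LocalTarget("enqueued.marker")

    def run(self):
        sqs = boto3.client("sqs")
        with open(self.ocr_results_path) as f:
            ocr_results = json.load(f)  # e.g. {"gif_id": "detected caption text", ...}
        for gif_id, caption in ocr_results.items():
            sqs.send_message(
                QueueUrl=self.queue_url,
                MessageBody=json.dumps({"gif_id": gif_id, "caption": caption}),
            )
        with self.output().open("w") as marker:
            marker.write("done\n")


if __name__ == "__main__":
    luigi.build([EnqueueOcrUpdates()], local_scheduler=True)
```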

One of the biggest challenges in building this update was ensuring that we could process data for millions of GIFs quickly. I had to learn how to optimize the runtime of the code that prepares GIF updates for Elasticsearch. My first iteration took 80+ hours, but eventually I got it to run in just eight.
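The post doesn’t detail where those savings came from, but a common culprit in jobs like this is issuing one indexing request per document. Batching updates with Elasticsearch’s bulk helper, as in the hypothetical sketch below (index and field names are assumptions), is one typical way to claw back hours.

```python
# Hypothetical sketch of batched partial updates with the Elasticsearch bulk
# helper, one common way to speed up per-document update jobs. Index and field
# names are assumptions, not GIPHY's actual schema.
from elasticsearch import Elasticsearch
from elasticsearch.helpers import bulk

es = Elasticsearch("http://localhost:9200")

def update_actions(ocr_results):
    """Yield one partial-update action per GIF instead of a request per GIF."""
    for gif_id, caption in ocr_results.items():
        yield {
            "_op_type": "update",
            "_index": "gifs",
            "_id": gif_id,
            "doc": {"caption_text": caption},
        }

ocr_results = {"gif123": "where are the turtles"}
bulk(es, update_actions(ocr_results))
```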

Once all the data was indexed, the next step was to incorporate the text/caption metadata into our query. I used what’s called a match phrase query, which looks for words in the caption that appear in the same order as the words in the search input—guaranteeing that a substring of my movie quote is intact in the results. I also had to decide how much weight to give the data from Google Cloud Vision relative to other sources of data we have about a GIF (like its tags or the frequency with which users click on it) to determine the most relevant results.
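As a rough illustration of that query shape, the sketch below pairs a phrase match on the OCR-derived caption field with other signals inside a bool query. The field names and boost values are assumptions for illustration, not GIPHY’s production settings.

```python
# Hypothetical sketch of a match-phrase query combined with other signals.
# Field names ("caption_text", "tags") and boosts are illustrative assumptions.
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")

query = {
    "bool": {
        "should": [
            # Matches captions containing the query words in the same order.
            {"match_phrase": {"caption_text": {"query": "where are the turtles", "boost": 2.0}}},
            # Other metadata, like tags, still contributes to the relevance score.
            {"match": {"tags": {"query": "where are the turtles", "boost": 1.0}}},
        ],
        "minimum_should_match": 1,
    }
}

results = es.search(index="gifs", query=query)
for hit in results["hits"]["hits"]:
    print(hit["_id"], hit["_score"])
```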

It was time to see how the change would affect results. Using an internal GIPHY tool called Search UX, I searched for “where are the turtles,” a quote from “The Office.” The difference between the old query and the new one was dramatic:

GiphySearch_3.png

I also used a tool that examines the change on a larger scale by running the old and new queries against a random set of search terms—useful for ensuring that the change won’t disrupt popular searches like “cat” or “happy birthday,” which already deliver high-quality results.

See the GIFference

After our internal tools indicated a positive change, I launched the updated query as an A/B experiment. The results looked promising, with an overall increase in click-through rate of 0.5 percent. But my change affects a very specific type of search, especially longer phrases, and the impact of the change is even more noticeable for queries in this category. For example, click-through rate when searching for the phrase “never give up never surrender” (from “Galaxy Quest”) increased 32 percent, and click-through rate for the phrase “gotta be quicker than that” increased 31 percent. In addition to quotes from movies and TV shows, we saw improvements for general phrases like “everything will be ok” and “there you go.” The final click-through rate for these queries is almost 100 percent!

The ultimate test was my own, though. I revisited my search query from the beginning of the summer:

GiphySearch_4.png

Success! The search results are much improved. Now, the next time you use GIPHY to search for a specific scene or a direct quote, the results will show you exactly what you were looking for.

To learn more about the technical details behind my project, see the GIPHY Engineering blog.

Gamescom, Hot Chips, Hackathon and inclusivity – Weekend Reading: Aug. 25 edition

This week was awash in news out of gamescom 2017, a European trade fair for digital gaming culture that took place in Cologne, Germany. If your head is spinning from all the announcements, take a deep breath and devote the next three minutes to watching this video, with everything you need to know out of…
