
Intelligent GIF Search: Finding the right GIF to express your emotions

The popularity of GIFs has risen, in large part because they afford users a unique way to express their emotions. That said, it can be challenging to find the perfect GIF to express yourself. You could be looking for a special expression from a favorite celebrity, an iconic moment from a movie or TV show, or just a way to say ‘good morning’ that’s not mundane. To provide the best GIF search experience, Bing employs techniques such as sentiment analysis, OCR, and even pose modeling of subjects appearing in GIFs to reflect subtle intent variations in your queries. Read on to find out more about how we made it work, and experience it yourself on Bing’s Image search by typing a query like ‘angry panda’. Here are some of the techniques we’ve used to make our GIF search more intelligent.
 

Vector Search and Word Embeddings for Images to Improve Recall

As we’ve previously done with Image Ranking and Vector Based Search, we’ve gone beyond simple keyword-matching techniques to capture underlying semantic relationships. Using text and image embedding vectors, we first map queries and images into a high-dimensional vector space and use similarity to improve recall. In simple words, vector-based search teaches the search engine that words like “amazing”, “great”, “awesome”, and “fantastic” are semantically related. This allows us to retrieve documents not just by exact match, but also by match with semantically related terms.
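To make this concrete, here is a minimal sketch of embedding-based recall. The tiny vectors, GIF IDs and lookup table below are purely illustrative assumptions, not Bing's actual encoders or index; in practice the embeddings come from trained text and image models.

```python
import numpy as np

# Hypothetical, pre-computed GIF embeddings (illustrative only).
gif_index = {
    "gif_001": np.array([0.9, 0.1, 0.0]),   # e.g. an "awesome" reaction GIF
    "gif_002": np.array([0.1, 0.8, 0.2]),   # e.g. a "good morning" GIF
}

def embed_query(text):
    # Placeholder: a real system would run a trained text encoder here.
    vocab = {"awesome": np.array([1.0, 0.0, 0.0]),
             "fantastic": np.array([0.95, 0.05, 0.0]),
             "good morning": np.array([0.0, 1.0, 0.1])}
    return vocab.get(text, np.zeros(3))

def cosine(a, b):
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / denom) if denom else 0.0

def search(query, top_k=2):
    q = embed_query(query)
    scored = [(gif_id, cosine(q, vec)) for gif_id, vec in gif_index.items()]
    return sorted(scored, key=lambda x: x[1], reverse=True)[:top_k]

# "fantastic" retrieves the GIF indexed with "awesome"-like semantics,
# even though the keywords never match exactly.
print(search("fantastic"))
```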


GIF Summarization and OCR algorithms for improving precision

One reason GIF search is more complicated than image search is that a GIF is composed of many images (frames), and therefore you need to search through multiple images, not just the first, to check for relevance. For instance, a search for a cat GIF, or for a celebrity, TV show, or cartoon character GIF, needs to ensure that the subject occurs in multiple frames of the GIF and not just the first. That’s complicated. Moreover, many users include phrases in their queries like “hello”, “good morning”, or “Monday mornings”, where we need to ensure that these textual messages are also included in the GIF. That’s complicated too, and that’s where our Optical Character Recognition (OCR) system comes into play. We use a deep-neural-network-based OCR system, and we’ve added synthetic training data to better adapt it to the GIF scenario.

The multi-frame nature of a GIF introduces additional complexities for OCR as well. For example, an OCR system would look at the images below and detect four different pieces of text – “HA”, “HAVE”, “HAVE FU” and “HAVE FUN”. In fact, there’s just one piece of text – “HAVE FUN”. We use text similarity combined with spatial and temporal information to disambiguate such cases.
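As a rough illustration of this idea, the sketch below merges per-frame OCR detections using textual, spatial and temporal continuity. The bounding boxes and the simple prefix rule are invented for the example and are not the production disambiguation logic.

```python
# Per-frame OCR detections: (frame_index, text, bounding_box).
# Boxes are (x, y, w, h); the coordinates are made up for illustration.
detections = [
    (0, "HA",       (40, 200, 60, 30)),
    (1, "HAVE",     (40, 200, 110, 30)),
    (2, "HAVE FU",  (40, 200, 170, 30)),
    (3, "HAVE FUN", (40, 200, 200, 30)),
]

def overlaps(b1, b2):
    # Simple spatial check: same caption region if the boxes intersect.
    x1, y1, w1, h1 = b1
    x2, y2, w2, h2 = b2
    return x1 < x2 + w2 and x2 < x1 + w1 and y1 < y2 + h2 and y2 < y1 + h1

def merge_detections(dets):
    groups = []
    for frame, text, box in sorted(dets):
        placed = False
        for group in groups:
            last_frame, last_text, last_box = group[-1]
            # Temporal + spatial + textual continuity: adjacent frames,
            # overlapping boxes, and the earlier text is a prefix of the later.
            if (frame - last_frame <= 1 and overlaps(box, last_box)
                    and text.startswith(last_text)):
                group.append((frame, text, box))
                placed = True
                break
        if not placed:
            groups.append([(frame, text, box)])
    # Keep the longest (most complete) string from each group.
    return [max(g, key=lambda d: len(d[1]))[1] for g in groups]

print(merge_detections(detections))   # ['HAVE FUN']
```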


Sentiment Analysis using text – to improve results quality

A common scenario for GIF search is emotion queries – where users are searching for GIFs that match a certain emotion (searches like “happy”, “sad”, “awesome”, “great”, “angry” or “frustrated”). Here, we analyze the sentiment/emotion of the GIF query and try to provide GIF results with matching sentiment. Query sentiment analysis is complicated because queries usually contain just two or three terms, and those terms don’t always reflect emotions. To understand query sentiment, we’ve analyzed public community websites and learned billions of relationships between text and emojis.

To understand the sentiment of GIF documents, we analyze the text that surrounds the GIFs on web pages. With sentiment for both the query and the documents, we can match the sentiment of the user’s query to the results they see. For instance, if a user issues the query “good job”, and we’ve already detected text like “Good job 😊 😊 ” on chat sites, we would infer that “good job” is a query with positive sentiment and choose GIF documents with positive sentiment.
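A toy version of this matching step might look like the following, where the emoji lexicon, sentiment scores and candidate GIFs are all made up for illustration and stand in for signals mined at much larger scale.

```python
# Toy emoji-derived sentiment lexicon: in the real system this comes from
# billions of text/emoji co-occurrences mined from community sites.
EMOJI_SENTIMENT = {"😊": 1.0, "🎉": 1.0, "😢": -1.0, "😡": -1.0}

def text_sentiment(text_with_emojis):
    scores = [EMOJI_SENTIMENT[ch] for ch in text_with_emojis if ch in EMOJI_SENTIMENT]
    return sum(scores) / len(scores) if scores else 0.0

# Phrase sentiment learned from observed usage like "Good job 😊 😊".
query_sentiment = text_sentiment("Good job 😊 😊")   # positive

# Candidate GIFs with sentiment inferred from surrounding page text.
candidates = [("gif_cheering", 0.9), ("gif_crying", -0.8), ("gif_neutral", 0.0)]

def rerank(cands, q_sent):
    # Prefer GIFs whose document sentiment is close to the query sentiment.
    return sorted(cands, key=lambda c: abs(c[1] - q_sent))

print(rerank(candidates, query_sentiment))   # cheering first, crying last
```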
 

Expressiveness, Pose and Awesomeness using CNNs

Celebrities are a major area of GIF searches. Given that users employ GIFs to express their emotions, it is a basic requirement that the top-ranked celebrity GIFs convey strong emotions. Selecting GIFs that contain the right celebrity is easy, but identifying GIFs that convey strong emotions or messages is hard. We do this by using deep learning models to analyze poses, actions, and expressiveness.

Poses and actions
Poses can be modeled using the positions of skeleton points such as the head, shoulders, and hands. Actions can be modeled using the motion of these points across frames. To extract features that depict human poses and actions, we estimate the skeleton point positions in each frame and the motion across adjacent frames. A fully convolutional network is deployed to estimate each skeleton point of the upper body. The motion vectors of these skeleton points are extracted to capture the motion information. A final model deduces the ‘awesomeness’ by examining the poses and actions in the GIF.
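Conceptually, the motion features could be derived along these lines. The keypoint coordinates, summary features and linear ‘awesomeness’ scorer below are stand-ins for the trained models described above, included only to show the shape of the computation.

```python
import numpy as np

# Hypothetical per-frame upper-body skeleton keypoints (x, y), e.g. from a
# fully convolutional pose-estimation model: head, shoulders, elbows, hands.
frames = np.array([
    [[50, 20], [35, 40], [65, 40], [30, 70], [70, 70], [28, 95], [72, 95]],
    [[50, 18], [35, 40], [65, 40], [25, 60], [75, 60], [20, 75], [80, 75]],
    [[50, 16], [35, 40], [65, 40], [22, 50], [78, 50], [15, 55], [85, 55]],
], dtype=float)

def motion_features(keypoints):
    # Motion vectors of each skeleton point between adjacent frames.
    motion = np.diff(keypoints, axis=0)            # (frames-1, points, 2)
    magnitudes = np.linalg.norm(motion, axis=-1)   # per-point displacement
    # Simple summary features; a learned model would consume richer ones.
    return np.array([magnitudes.mean(), magnitudes.max(), magnitudes.std()])

def awesomeness_score(keypoints, weights=np.array([0.5, 0.4, 0.1])):
    # Stand-in for a trained model that maps pose/motion features to a score.
    return float(motion_features(keypoints) @ weights)

print(awesomeness_score(frames))   # larger motion -> higher score
```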

Exaggerated expressions
Here we analyze the facial expression of the subject to select results with more exaggerated facial expressions. We extract the expressions from multiple frames of the GIF and compute a score that indicates the level of exaggeration. Our GIF search then returns results that have more exaggerated facial expressions.

By pairing deep convolutional neural networks and models for expressiveness, poses, actions, and exaggerated expressions with our huge celebrity database, we can return awesome results for celebrity searches.

Image graph and other techniques
In addition to helping us understand semantic relationships, Image Graph also improves ranking quality. Image Graph is made up of clusters of similar images (in this case, GIFs) and carries historical data (e.g., clickthrough rate) for images. As shown in the graph below, the images within the same cluster are visually similar (the distance between images denotes similarity), and the distance between the clusters denotes the visual similarity of the main images within the clusters. Now, if we know that an image in cluster D was extremely popular, we can propagate that clickthrough rate data to all other GIFs in cluster D. This greatly improves ranking quality. We can also improve the diversity of the recommended GIFs using this technique.
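The propagation step can be sketched roughly as follows, with invented cluster IDs and clickthrough rates standing in for the real Image Graph data.

```python
# Toy Image Graph: clusters of visually similar GIFs with click history
# for only some members. Cluster IDs and numbers are illustrative.
clusters = {
    "A": {"gif_1": 0.12, "gif_2": None},
    "D": {"gif_7": 0.45, "gif_8": None, "gif_9": None},
}

def propagate_ctr(cluster):
    # Share observed clickthrough rates with unseen GIFs in the same cluster.
    observed = [ctr for ctr in cluster.values() if ctr is not None]
    fallback = sum(observed) / len(observed) if observed else 0.0
    return {gif: (ctr if ctr is not None else fallback)
            for gif, ctr in cluster.items()}

ranked_signals = {name: propagate_ctr(c) for name, c in clusters.items()}
print(ranked_signals["D"])   # gif_8 and gif_9 inherit gif_7's popularity signal
```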

Finally, we also consider source authority, virality and popularity while deciding which GIFs to show on top. And, we have a detrimental content classifier (based on images, and another based on text) to remove offensive content to ensure that all our top results are clean.

There you have it – did you really imagine that so many machine learning techniques are required to make GIF ranking work? Altogether, these components bring intelligence to Bing’s GIF search experience, making it easier for users to find what they’re looking for. Give it a try on Bing image search.
 

AI and Machine Learning: Boosting Compliance and Preventing Spam

Some of the most advanced strategies in the current technology and analytics spaces involve artificial intelligence and machine learning. These innovative approaches hold nearly endless possibilities for technological applications, from eliminating manual work to enabling software to make accurate predictions based on specific performance indicators.

 

In this way, it’s no surprise that AI and machine learning – utilized both individually and in conjunction with one another – are popping up in technologies that span industry sectors. As capabilities like these continue to bloom, it’s important for stakeholders and decision-makers to understand the ways in which these strategies can be leveraged within their business and the advantages they can provide.

To that end, let’s take a closer look at AI and machine learning and, specifically, the ways these approaches can support compliance with industry requirements and prevent spam messages.

AI and ML: Overlapping, but not interchangeable  

Before we delve into compliance and alleviating the problem of spam, it’s vital that technology and C-suite executives have a foundational understanding of AI and machine learning concepts. This is especially essential now that AI and machine learning overlap in more than a few areas and are often treated as interchangeable.

As TechRadar contributor Mike Moore noted, AI includes the use of robust algorithms to enable computers to complete tasks more accurately and efficiently than humans can, opening the door for automation and other key processes. AI allows hardware to “think for itself,” in a way, Moore explained.

Machine learning, on the other hand, takes this a step further and allows computers to not only complete tasks that used to require human intervention, but to also learn and advance based on the experiences of these tasks and the data used to complete them.

Technology expert Patrick Nguyen summed it up nicely for Adweek.

“AI is any technology that enables a system to demonstrate human-like intelligence,” Nguyen said. “Machine learning is one type of AI that uses mathematical models trained on data to make decisions. As more data becomes available, ML models can make better decisions.”

In this way, while AI and machine learning are often discussed in connection with one another, they are not identical concepts.

Overall, 15 percent of businesses currently use AI, while another 31 percent plan to use it within the next year, CME reported. Additionally, 47 percent of digitally mature companies already have an AI strategy in place.

Rising usage is also being seen with machine learning – The Enterprisers Project noted that 90 percent of business leaders agree that automation supported by machine learning will boost accuracy and decision-making. What’s more, 27 percent of executives have hired individuals with intelligent machine expertise to support their machine learning initiatives.

As AI and machine learning continue to emerge across enterprise software and important business strategies, it’s imperative to understand the difference between them and the use cases for these capabilities. Let’s examine a few examples, including AI’s use in compliance and how machine learning can help identify and eliminate spam.

Automation continues to provide companies of all kinds a means to modernize their businesses.

AI for GDPR compliance

The EU’s new data privacy measure, the General Data Protection Regulation (GDPR), went into effect in spring 2018 and left many businesses scrambling to shore up security according to its requirements. This fact was recently demonstrated through the use of an AI tool created by the European University Institute, which analyzed the privacy policies of 14 top technology companies. The tool, dubbed Claudette, showed that despite updates aligned with GDPR, organizations are still having trouble achieving compliance.

“One month after the GDPR was enforced, these were the results: from the total privacy policy sentences they evaluated, 11 percent were marked as unclear while 33.9 percent were identified as potentially problematic or provided insufficient information,” Trend Micro explained. “According to their report, none of the analyzed privacy policies meet the requirements of the GDPR.”

While the purpose of the report was to get a glimpse into how businesses are progressing with their compliance, the use of AI tools like Claudette also provides hope for the ways such advanced capabilities can be used in the future. Currently, Claudette is only used in experimental capacities, but it could blaze a path for the use of AI to analyze compliance with GDPR and other industry regulations.

Machine learning to fight spam

Industry compliance is a continual challenge and important initiative for businesses. Another pressing problem is the persistence of spam messages, which, according to Trend Micro researcher Jon Oliver, can be fought with the use of advanced machine learning.

As Oliver explained, the processes to prevent spam messages require a considerable amount of data, which supports the actual learning necessary to enable a machine to identify and defend against spam. This presents quite the challenge, particularly as spam creators become savvier and evolve from the use of plain text messages to attachments and other approaches.
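For readers who want a feel for the underlying idea, here is a toy text-classification sketch of spam filtering. It is a generic illustration with made-up messages, not Trend Micro's antispam engine; a production system learns from far larger, carefully labeled datasets and combines many protection layers.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny, made-up training set for illustration only.
messages = [
    "Limited offer!!! Claim your free prize now",
    "Cheap meds, no prescription needed",
    "Meeting moved to 3pm, see agenda attached",
    "Can you review the quarterly report draft?",
]
labels = [1, 1, 0, 0]   # 1 = spam, 0 = legitimate

# Bag-of-words features plus a simple linear classifier.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(messages, labels)

print(model.predict(["Claim your free prize today"]))   # likely [1]
```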

Thankfully, Trend Micro has been leveraging machine learning capabilities within the Trend Micro Anti-Spam Engine (TMASE) and Hosted Email Security (HES) for more than a decade now, building up a massive repository of quality datasets to support machine learning.

“Employed alongside other antispam protection layers (for example, Email Reputation Services, IP Profiler, antispam composite engine), machine learning algorithms were used to correlate threat information and perform in-depth file analysis to catch and keep spam off enterprise networks,” Oliver explained. “The strategy for using machine learning in antispam engines involved the use of state-of-the-art models, trust on an iterative method to improve the model’s accuracy, and the collection of accurately labeled data, which is a crucial part of the process.”

In this way, with each new spam message, machine learning-enabled prevention measures were able to learn a little more about current spam processes and approaches. And with spam representing a key threat to network and overall enterprise security, supporting an advanced way to identify and block spam before it reaches recipients is a considerable boon for data security.

Through Trend Micro’s antispam approach, which incorporates machine learning in conjunction with other protection technologies, researchers found that 95 percent of spam messages were effectively identified and blocked.

AI and ML for security

Artificial intelligence and machine learning will continue to emerge in an array of different settings, but they can be particularly beneficial for industry compliance and information security. As expert researchers like Oliver pointed out, however, it’s important to leverage these advanced processes as part of a layered security approach that incorporates other, established safeguarding measures.

To find out more about how AI and machine learning can benefit your network and infrastructure security, as well as how Trend Micro leverages machine learning in our TMASE and HES solutions, connect with our protection experts today.


Samsung Electronics Wins at Two Top Global AI Machine Reading Comprehension Challenges

 

Samsung Research, the advanced R&D hub of Samsung Electronics’ SET (end-products) business, has ranked first in two of the world’s top global artificial intelligence (AI) machine reading comprehension competitions.

 

Samsung Research recently placed first in the MAchine Reading COmprehension (MS MARCO) competition held by Microsoft (MS) and also showed the best performance in TriviaQA*, hosted by the University of Washington, proving the excellence of its AI algorithm.

 

With intense competition in developing AI technologies globally, machine reading comprehension competitions such as MS MARCO are booming around the world. MS MARCO and TriviaQA are among the actively researched and used machine reading comprehension competitions along with SQuAD of Stanford University and NarrativeQA of DeepMind. Distinguished universities around the world and global AI firms including Samsung are competing in these challenges.

 

Machine reading comprehension is a task in which an AI algorithm analyzes data and finds an optimum answer to a query of its own accord. In MS MARCO and TriviaQA, AI algorithms are tested on their ability to process natural language in human Q&As as well as in written text from various types of documents, such as news articles and blog posts.

 

For example, in MS MARCO, ten web documents are presented for a certain query and the AI algorithm must create an optimum answer from them. Queries are randomly selected from a million queries issued by users of Bing (MS’s search engine). Answers are evaluated statistically by estimating how close they are to human answers. The test is designed to apply an AI algorithm to solving real-world problems.
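As a simplified illustration of how closeness to human answers can be estimated, the snippet below computes a token-overlap F1 score between a generated answer and a reference answer. Benchmarks of this kind typically rely on related overlap metrics such as ROUGE or BLEU; the example sentences here are invented.

```python
from collections import Counter

def token_f1(prediction, reference):
    # Token-level overlap between a generated answer and a human answer.
    pred_tokens = prediction.lower().split()
    ref_tokens = reference.lower().split()
    common = Counter(pred_tokens) & Counter(ref_tokens)
    overlap = sum(common.values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred_tokens)
    recall = overlap / len(ref_tokens)
    return 2 * precision * recall / (precision + recall)

print(token_f1("the kepler space telescope launched in 2009",
               "kepler launched in march 2009"))
```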

 

Samsung Research took part in the competitions with ConZNet, an AI algorithm developed by the company’s AI Center. ConZNet gains its capabilities by adopting the reinforcement learning** technique, which advances machine intelligence by giving reasonable feedback on outcomes, similar to a carrot-and-stick strategy in a learning process.

 

With the recent acceleration in global competition to develop AI technologies, contests are widespread in areas of computer vision (technologies to analyze characters and images) and visual Q&A, which solve problems using recognized images and characters, as well as machine reading comprehension. The Beijing branch of Samsung Research won the International Conference on Document Analysis and Recognition (ICDAR) competition, hosted by the International Association for Pattern Recognition (IAPR), in March, putting it in the top tier for global computer vision tests. ICDAR is the most influential competition in Optical Character Recognition (OCR) technologies.

 

“We are developing an AI algorithm to provide answers to user queries in a simpler and more convenient manner, for real life purposes,” said Jihie Kim, Head of Language Understanding Lab at Samsung Research. “Active discussion is underway in Samsung to adopt the ConZNet AI algorithm for products, services, customer response and technological development.”

 

* Competitions such as MS MARCO and TriviaQA allow contestants to participate at any time, and rankings are altered according to real-time test results.

** Reinforcement learning is an advanced machine learning technique; cutting-edge AI systems, including AlphaGo, improve machine intelligence by applying it.

Microsoft to acquire Bonsai in move to build ‘brains’ for autonomous systems

Bonsai’s team members. Photo courtesy of Bonsai.

With AI’s meteoric rise, autonomous systems have been projected to grow to more than 800 million in operation by 2025. However, while envisioned in science fiction for a long time, truly intelligent autonomous systems are still elusive and remain a holy grail. The reality today is that training autonomous systems that function amidst the many unforeseen situations in the real world is very hard and requires deep expertise in AI — essentially making it unscalable.

To achieve this inflection point in AI’s growth, traditional machine learning methodologies aren’t enough. Bringing intelligence to autonomous systems at scale will require a unique combination of the new practice of machine teaching, advances in deep reinforcement learning and leveraging simulation for training. Microsoft has been on a path to make this a reality through continued AI research breakthroughs; the development of the powerful Azure AI platform of tools, services and infrastructure; advances in deep learning, including our acquisition of Maluuba; and the impressive efficiencies we’ve achieved in simulation-based training with Microsoft Research’s AirSim tool. With software developers at the center of digital transformation, our pending acquisition of GitHub further underscores just how imperative it is that we empower developers to break through and lead this next wave of innovation.

Today we are excited to take another major step forward in our vision to make it easier for developers and subject matter experts to build the “brains” (machine learning models) for autonomous systems of all kinds, with the signing of an agreement to acquire Bonsai. Based in Berkeley, California, and an M12 portfolio company, Bonsai has developed a novel approach using machine teaching that abstracts the low-level mechanics of machine learning, so that subject matter experts, regardless of AI aptitude, can specify and train autonomous systems to accomplish tasks. The actual training takes place inside a simulated environment.

The company is building a general-purpose, deep reinforcement learning platform especially suited for enterprises leveraging industrial control systems such as robotics, energy, HVAC, manufacturing and autonomous systems in general. This includes unique machine-teaching innovations, automated model generation and management, a host of APIs and SDKs for simulator integration, as well as pre-built support for leading simulations all packaged in one end-to-end platform.

Bonsai’s platform combined with rich simulation tools and reinforcement learning work in Microsoft Research becomes the simplest and richest AI toolchain for building any kind of autonomous system for control and calibration tasks. This toolchain will compose with Azure Machine Learning running on the Azure Cloud with GPUs and Brainwave, and models built with it will be deployed and managed in Azure IoT, giving Microsoft an end-to-end solution for building, operating and enhancing “brains” for autonomous systems.

What I find exciting is that Bonsai has achieved some remarkable breakthroughs with their approach that will have a profound impact on AI development. Last fall, they established a new reinforcement learning benchmark for programming industrial control systems. Using a robotics task to demonstrate the achievement, the platform successfully trained a simulated robotic arm to grasp and stack blocks on top of one another by breaking down the task into simpler sub-concepts. Their novel technique performed 45 times faster than a comparable approach from Google’s DeepMind. Then, earlier this year, they extended deep reinforcement learning’s capabilities beyond traditional game play, where it’s often demonstrated, to real-world applications. Using Bonsai’s AI Platform and machine teaching, subject matter experts from Siemens, with no AI expertise, trained an AI model to autocalibrate a Computer Numerical Control machine 30 times faster than the traditional approach. This represented a huge milestone in industrial AI, and the implications when considered across the broader sector are just staggering.

To realize this vision of making AI more accessible and valuable for all, we have to remove the barriers to development, empowering every developer, regardless of machine learning expertise, to be an AI developer. Bonsai has made tremendous progress here and Microsoft remains committed to furthering this work. We already deliver the most comprehensive collection of AI tools and services that make it easier for any developer to code and integrate pre-built and custom AI capabilities into applications and extend to any scenario. There are over a million developers using our pre-built Microsoft Cognitive Services, a collection of intelligent APIs that enable developers to easily leverage high-quality vision, speech, language, search and knowledge technologies in their apps with a few lines of code. And last fall, we led a combined industry push to foster a more open AI ecosystem, bringing AI advances to all developers, on any platform, using any language through the introduction of the Open Neural Network Exchange (ONNX) format and Gluon open source interface for deep learning.

We’re really confident this unique marriage of research, novel approach and technology will have a tremendous effect toward removing barriers and accelerating the current state of AI development. We look forward to having Bonsai and their team join us to help realize this collective vision.

The post Microsoft to acquire Bonsai in move to build ‘brains’ for autonomous systems appeared first on The Official Microsoft Blog.

Keeping 2 billion Android devices safe with machine learning

Posted by Sai Deep Tetali, Software Engineer, Google Play Protect

At Google I/O 2017, we introduced Google Play Protect, our comprehensive set of security services for Android. While the name is new, the smarts powering Play Protect have protected Android users for years.

Google Play Protect's suite of mobile threat protections are built into more than 2 billion Android devices, automatically taking action in the background. We're constantly updating these protections so you don't have to think about security: it just happens. Our protections have been made even smarter by adding machine learning elements to Google Play Protect.

Security at scale

Google Play Protect provides in-the-moment protection from potentially harmful apps (PHAs), but Google's protections start earlier.

Before they're published in Google Play, all apps are rigorously analyzed by our security systems and Android security experts. Thanks to this process, Android devices that only download apps from Google Play are 9 times less likely to get a PHA than devices that download apps from other sources.

After you install an app, Google Play Protect continues its quest to keep your device safe by regularly scanning your device to make sure all apps are behaving properly. If it finds an app that is misbehaving, Google Play Protect either notifies you, or simply removes the harmful app to keep your device safe.

Our systems scan over 50 billion apps every day. To keep on the cutting edge of security, we look for new risks in a variety of ways, such as identifying specific code paths that signify bad behavior, investigating behavior patterns to correlate bad apps, and reviewing possible PHAs with our security experts.

In 2016, we added machine learning as a new detection mechanism and it soon became a critical part of our systems and tools.

Training our machines

In the most basic terms, machine learning means training a computer algorithm to recognize a behavior. To train the algorithm, we give it hundreds of thousands of examples of that behavior.

In the case of Google Play Protect, we are developing algorithms that learn which apps are "potentially harmful" and which are "safe." To learn about PHAs, the machine learning algorithms analyze our entire catalog of applications. Then our algorithms look at hundreds of signals combined with anonymized data to compare app behavior across the Android ecosystem to find PHAs. They look for behavior common to PHAs, such as apps that attempt to interact with other apps on the device, access or share your personal data, download something without your knowledge, connect to phishing websites, or bypass built-in security features.
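In spirit, this detection step resembles the toy classifier below, where the behavioral features, example apps and labels are entirely made up and only hint at the kinds of signals described above; the real systems operate on far richer data at vastly larger scale.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Illustrative behavioral signals per app (values are invented):
# [requests_sms_perms, contacts_other_apps, background_downloads, phishing_url_hits]
X = np.array([
    [1, 1, 3, 2],   # known PHA
    [1, 0, 5, 1],   # known PHA
    [0, 0, 0, 0],   # known safe
    [0, 1, 0, 0],   # known safe
])
y = np.array([1, 1, 0, 0])   # 1 = potentially harmful, 0 = safe

clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)

# Score a new app's behavior profile.
new_app = np.array([[1, 1, 4, 0]])
print(clf.predict_proba(new_app))   # probability of being a PHA
```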

When we find apps that exhibit similar malicious behavior, we group them into families. Visualizing these PHA families helps us uncover apps that share similarities with known bad apps but have so far stayed under our radar.

After we identify a new PHA, we confirm our findings with expert security reviews. If the app in question is a PHA, Google Play Protect takes action on the app and then we feed information about that PHA back into our algorithms to help find more PHAs.

Doubling down on security

So far, our machine learning systems have successfully detected 60.3% of the malware identified by Google Play Protect in 2017.

In 2018, we're devoting a massive amount of computing power and talent to create, maintain and improve these machine learning algorithms. We're constantly leveraging artificial intelligence and our highly skilled researchers and engineers from all across Google to find new ways to keep Android devices safe and secure. In addition to our talented team, we work with the foremost security experts and researchers from around the world. These researchers contribute even more data and insights to keep Google Play Protect on the cutting edge of mobile security.

To check out Google Play Protect, open the Google Play app and tap Play Protect in the left panel.

Acknowledgements: This work was developed in joint collaboration with Google Play Protect, Safe Browsing and Play Abuse teams with contributions from Andrew Ahn, Hrishikesh Aradhye, Daniel Bali, Hongji Bao, Yajie Hu, Arthur Kaiser, Elena Kovakina, Salvador Mandujano, Melinda Miller, Rahul Mishra, Damien Octeau, Sebastian Porst, Chuangang Ren, Monirul Sharif, Sri Somanchi, Sai Deep Tetali, Zhikun Wang, and Mo Yu.

One student’s quest to track endangered whales with machine learning

Ever since I can remember, music has been a huge part of who I am. Growing up, my parents formed a traditional Mexican trio band and their music filled the rooms of my childhood home. I’ve always felt deeply moved by music, and I’m fascinated by the emotions music brings out in people.


When I attended community college and took my first physics course, I was introduced to the science of music—how it’s a complex assembly of overlapping sound waves that we sense from the resulting vibrations created in our eardrums. Though my parents had always taken an artistic approach to playing with soundwaves, I took a scientific one. Studying acoustics opened up all kinds of doors for me I never thought were possible, from pursuing a career in electrical engineering—to studying whale calls using machine learning.


Daniel with his family on move-in day for his first quarter at Cal Poly.

I applied to the Monterey Bay Aquarium Research Institute (MBARI) summer internship program, where I learned about John Ryan and Danelle Cline’s research using machine learning (ML) to monitor whale sounds. Once again, I found myself fascinated by sound, this time by analyzing the sounds of endangered blue and fin whales to further understand their ecology. By identifying and tracking the whales’ calls and changing migration patterns, scientists hope to gain insight on the broader impacts of climate change on ocean ecology, and how human influence negatively impacts marine life.


MBARI had already collected thousands of hours of audio, but it would have been too cumbersome a task to sift through all of that data to find whale calls. That’s what led Danelle to introduce me to machine learning. ML enables us to pick out patterns from very large data sets like MBARI’s audio recordings. By training the model using TensorFlow, we can efficiently sift through the data and track these whales with 98 percent accuracy. This tracking system can tell us how many calls were made in any given amount of time near Monterey Bay, and will enable scientists at MBARI to track the whales’ changing migration behavior and advance their research on whale ecology and how human influence above water negatively impacts marine life below.
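A heavily simplified sketch of such a detector is shown below: a small TensorFlow/Keras network that classifies spectrogram windows as whale call or background noise. The input shapes and random placeholder data are assumptions for illustration and bear no resemblance to MBARI's real recordings or model.

```python
import numpy as np
import tensorflow as tf

# Placeholder data: batches of log-mel spectrogram windows labeled
# 1 (whale call) or 0 (background noise). Real inputs come from MBARI audio.
x_train = np.random.rand(32, 128, 64, 1).astype("float32")
y_train = np.random.randint(0, 2, size=(32,))

model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(16, 3, activation="relu", input_shape=(128, 64, 1)),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(1, activation="sigmoid"),   # call vs. no call
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(x_train, y_train, epochs=1, verbose=0)

# Score a new window of audio (already converted to a spectrogram).
window = np.random.rand(1, 128, 64, 1).astype("float32")
print(model.predict(window))   # probability the window contains a whale call
```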


What started as a passion for music ended in a love of engineering thanks to the opportunity at MBARI. Before community college I had no idea what an engineer even did, and I certainly never imagined my music background would be relevant in using TensorFlow to identify and classify whale calls within a sea of ocean audio data. But I soon learned there’s more than one way to pursue a passion, and I’m excited for what the future holds—for marine life, for machine learning, and for myself. Following the whales on their journey has led me to begin mine.

How TensorFlow is powering technology around the world

Editor’s Note: AI is behind many of Google’s products and is a big priority for us as a company (as you may have heard at Google I/O yesterday). So we’re sharing highlights on how AI already affects your life in ways you might not know, and how people from all over the world have used AI to build their own technology.

Machine learning is at the core of many of Google’s own products, but TensorFlow—our open source machine learning framework—has also been an essential component of the work of scientists, researchers and even high school students around the world. At Google I/O, we’re hearing from some of these people, who are solving big (we mean, big) problems—the origin of the universe, that sort of stuff. Here are some of the interesting ways they’re using TensorFlow to aid their work.

Ari Silburt, a Ph.D. student at Penn State University, wants to uncover the origins of our solar system. In order to do this, he has to map craters in the solar system, which helps him figure out where matter has existed in various places (and at various times) in the solar system. You with us? Historically, this process has been done by hand and is both time consuming and subjective, but Ari and his team turned to TensorFlow to automate it. They’ve trained the machine learning model using existing photos of the moon, and have identified more than 6,000 new craters.


On the left is a picture of the moon, where it’s hard to tell where the heck those craters are. On the right we have an accurate depiction of crater distribution, thanks to TensorFlow.

Switching from outer space to the rainforests of Brazil: Topher White (founder of Rainforest Connection) invented “The Guardian” device to prevent illegal deforestation in the Amazon. The devices—which are upcycled cell phones running TensorFlow—are installed in trees throughout the forest, recognize the sound of chainsaws and logging trucks, and alert the rangers who police the area. Without these devices, the land must be policed by people, which is nearly impossible given the massive area it covers.


Topher installs guardian devices in the tall trees of the Amazon

Diabetic retinopathy (DR) is the fastest growing cause of blindness, with nearly 415 million diabetic patients at risk worldwide. If caught early, the disease can be treated; if not, it can lead to irreversible blindness. In 2016, we announced that machine learning was being used to aid diagnostic efforts in the area of DR, by analyzing a patient’s fundus image (photo of the back of the eye) with higher accuracy. Now we’re taking those fundus images to the next level with TensorFlow. Dr. Jorge Cuadros, an optometrist in Oakland, CA, is able to determine a patient’s risk of cardiovascular disease by analyzing their fundus image with a deep learning model.


Fundus image of an eye with sight-threatening retinal disease. With machine learning this image will tell doctors much more than eye health.

Good news for green thumbs of the world: Shaza Mehdi and Nile Ravenell are high school students who developed PlantMD, an app that lets you figure out if your plant is diseased. The machine learning model runs on TensorFlow, and Shaza and Nile used data from plantvillage.com and a few university databases to train the model to recognize diseased plants. Shaza also built another app that uses a similar approach to diagnose skin disease.

Shaza developed PlantMD, an app that recognizes diseased plants


To learn more about how AI can bring benefits to everyone, check out ai.google.

Solving problems with AI for everyone

Today, we’re kicking off our annual I/O developer conference, which brings together more than 7,000 developers for a three-day event. I/O gives us a great chance to share some of Google’s latest innovations and show how they’re helping us solve problems for our users. We’re at an important inflection point in computing, and it’s exciting to be driving technology forward. It’s clear that technology can be a positive force and improve the quality of life for billions of people around the world. But it’s equally clear that we can’t just be wide-eyed about what we create. There are very real and important questions being raised about the impact of technology and the role it will play in our lives. We know the path ahead needs to be navigated carefully and deliberately—and we feel a deep sense of responsibility to get this right. It’s in that spirit that we’re approaching our core mission.

The need for useful and accessible information is as urgent today as it was when Google was founded nearly two decades ago. What’s changed is our ability to organize information and solve complex, real-world problems thanks to advances in AI.

Pushing the boundaries of AI to solve real-world problems

There’s a huge opportunity for AI to transform many fields. Already we’re seeing some encouraging applications in healthcare. Two years ago, Google developed a neural net that could detect signs of diabetic retinopathy using medical images of the eye. This year, the AI team showed our deep learning model could use those same images to predict a patient’s risk of a heart attack or stroke with a surprisingly high degree of accuracy. We published a paper on this research in February and look forward to working closely with the medical community to understand its potential. We’ve also found that our AI models are able to predict medical events, such as hospital readmissions and length of stays, by analyzing the pieces of information embedded in de-identified health records. These are powerful tools in a doctor’s hands and could have a profound impact on health outcomes for patients. We’re going to be publishing a paper on this research today and are working with hospitals and medical institutions to see how to use these insights in practice.

Another area where AI can solve important problems is accessibility. Take the example of captions. When you turn on the TV it's not uncommon to see people talking over one another. This makes a conversation hard to follow, especially if you’re hearing-impaired. But using audio and visual cues together, our researchers were able to isolate voices and caption each speaker separately. We call this technology Looking to Listen and are excited about its potential to improve captions for everyone.

Saving time across Gmail, Photos, and the Google Assistant

AI is working hard across Google products to save you time. One of the best examples of this is the new Smart Compose feature in Gmail. By understanding the context of an email, we can suggest phrases to help you write quickly and efficiently. In Photos, we make it easy to share a photo instantly via smart, inline suggestions. We’re also rolling out new features that let you quickly brighten a photo, give it a color pop, or even colorize old black and white pictures.

One of the biggest time-savers of all is the Google Assistant, which we announced two years ago at I/O. Today we shared our plans to make the Google Assistant more visual, more naturally conversational, and more helpful.

Thanks to our progress in language understanding, you’ll soon be able to have a natural back-and-forth conversation with the Google Assistant without repeating “Hey Google” for each follow-up request. We’re also adding half a dozen new voices to personalize your Google Assistant, plus one very recognizable one—John Legend (!). So, next time you ask Google to tell you the forecast or play “All of Me,” don’t be surprised if John Legend himself is around to help.

We’re also making the Assistant more visually assistive with new experiences for Smart Displays and phones. On mobile, we’ll give you a quick snapshot of your day with suggestions based on location, time of day, and recent interactions. And we’re bringing the Google Assistant to navigation in Google Maps, so you can get information while keeping your hands on the wheel and your eyes on the road.

Someday soon, your Google Assistant might be able to help with tasks that still require a phone call, like booking a haircut or verifying a store’s holiday hours. We call this new technology Google Duplex. It’s still early, and we need to get the experience right, but done correctly we believe this will save time for people and generate value for small businesses.

Understanding the world so we can help you navigate yours

AI’s progress in understanding the physical world has dramatically improved Google Maps and created new applications like Google Lens. Maps can now tell you if the business you’re looking for is open, how busy it is, and whether parking is easy to find before you arrive. Lens lets you just point your camera and get answers about everything from that building in front of you ... to the concert poster you passed ... to that lamp you liked in the store window.

Bringing you the top news from top sources

We know people turn to Google to provide dependable, high-quality information, especially in breaking news situations—and this is another area where AI can make a big difference. Using the latest technology, we set out to create a product that surfaces the news you care about from trusted sources while still giving you a full range of perspectives on events. Today, we’re launching the new Google News. It uses artificial intelligence to bring forward the best of human intelligence—great reporting done by journalists around the globe—and will help you stay on top of what’s important to you.


The new Google News uses AI to bring forward great reporting done by journalists around the globe and help you stay on top of what’s important to you.

Helping you focus on what matters

Advances in computing are helping us solve complex problems and deliver valuable time back to our users—which has been a big goal of ours from the beginning. But we also know technology creates its own challenges. For example, many of us feel tethered to our phones and worry about what we’ll miss if we’re not connected. We want to help people find the right balance and gain a sense of digital wellbeing. To that end, we’re going to release a series of features to help people understand their usage habits and use simple cues to disconnect when they want to, such as turning a phone over on a table to put it in “shush” mode, or “taking a break” from watching YouTube when a reminder pops up. We're also kicking off a longer-term effort to support digital wellbeing, including a user education site which is launching today.

These are just a few of the many, many announcements at Google I/O—for Android, the Google Assistant, Google News, Photos, Lens, Maps and more, please see our latest stories.

‘All In’ on AI, Part 2: Driving the Evolution of 8K Picture Quality and Advanced Sound on TV through AI

What do you consider important when watching the Olympic Games on TV? Vivid picture and sound quality can provide a lifelike experience, just as if you were on site at the Games. The artificial intelligence (AI) technology which Samsung Electronics recently unveiled at CES 2018 promises to deliver this kind of experience, with picture quality nearly equivalent to 8K (7,680 × 4,320) resolution, as well as optimized sound, for real-time and other video content.

 

The new AI technology achieves close to 8K resolution and enhanced sound quality by aligning with the unique characteristics of the specific content, a step up from typical upscaling technology used to improve image quality. So how exactly does it work? Let’s take a closer look.

 

 

Enjoy any content in 8K through Machine Learning

 

The world’s first 8K AI technology, which realizes definition nearly equivalent to 8K, is based on machine learning. Conventional computers or smartphones run according to instructions that humans enter. In contrast, machine learning refers to the way AI learns certain patterns from many examples and gives optimized answers.

 

Samsung’s Machine Learning Super Resolution (MLSR) uses AI to compare low- and high-quality versions of the same content, learn the technical differences between the two, and build a vast database. It analyzes millions of pieces of video content and finds correlations. Based on this analysis, it can select the optimum filters to correct brightness, black level, spreading (blur) and other errors in the input, and transform low-definition content into something close to 8K high definition.
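To give a rough sense of how learning from low- and high-quality pairs works, here is a toy super-resolution sketch in TensorFlow/Keras. It is a generic stand-in with placeholder data and shapes, not Samsung's MLSR implementation.

```python
import numpy as np
import tensorflow as tf

# Toy stand-in for learned super-resolution: train on (low-res, high-res)
# pairs so the network learns to restore detail. Data is random placeholder.
low_res  = np.random.rand(8, 64, 64, 3).astype("float32")
high_res = np.random.rand(8, 128, 128, 3).astype("float32")

model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(32, 3, padding="same", activation="relu",
                           input_shape=(64, 64, 3)),
    tf.keras.layers.UpSampling2D(),                     # 64x64 -> 128x128
    tf.keras.layers.Conv2D(32, 3, padding="same", activation="relu"),
    tf.keras.layers.Conv2D(3, 3, padding="same"),       # reconstructed frame
])
model.compile(optimizer="adam", loss="mse")
model.fit(low_res, high_res, epochs=1, verbose=0)

upscaled = model.predict(low_res[:1])
print(upscaled.shape)   # (1, 128, 128, 3)
```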

 

The input content is processed in real time, frame by frame, and enhanced scene by scene, which makes it possible to upgrade image and sound quality immediately, regardless of whether the video source is live streaming or OTT (over-the-top) content.

 

 

8K picture quality through AI, what’s the difference?

 

Other upscaling techniques require human input to compare low-resolution and high-resolution scenes and find ways to reconstruct them. However, Samsung’s AI technology studies millions of images on its own using MLSR, allowing much-improved accuracy compared with conventional technologies.

 

There are three elements to improving picture quality on displays. First, ‘Detail Creation’ sharpens the detail of expression and improves the texture of areas with low definition that have become blurry after the file was compressed. Second, ‘Edge Restoration’ defines the edges of text, people or objects in a video, moving pixels on the borders to thin them down and increase legibility and visibility. For example, if text is smeared along its edges, the video will be adjusted around the text for clarity; in a video that shows the moon, Edge Restoration improves the details of the moon’s shadow and deepens the darkness of the background for a clear distinction. Lastly, ‘Noise Reduction’ gets rid of static noise generated during heavy compression or recompression of files. Because an image must be compressed in order to be transmitted, various ‘noises’ such as flickering points or blocky dots can appear, and these are removed according to the image’s characteristics.

 

 

AI delivers immersive sound effects

 

When watching dramas or movies, realistic, immersive sounds are as important as picture quality. Samsung’s AI technology not only transforms low-definition into high-definition, but also optimizes the sound quality of content.

 

Conventional TVs provide multiple view settings such as movie mode and sports mode according to the genre of content. With Samsung’s new AI technology, content can be automatically analyzed by characterizing scenes to provide optimum sounds.

 

For example, let’s say you were watching a movie that includes musical performances. AI technology can highlight the music in a way that allows you to experience the sound as the actual characters would. When the crowd applauds after the performance is over, you would hear the clapping the same way as if you were in the crowd in the movie. When characters are speaking, AI adjusts the sound to make sure the lines are communicated clearly.

 

Imagine you were watching a live broadcast of the 2018 PyeongChang Winter Olympics, which started last week. AI will enhance the voice of the announcer so that you don’t miss who’s up next. Once a game has started, AI will increase the background sound to deliver the liveliness of the actual event. With this tailored, scene-by-scene sound adjustment, audiences can enjoy the best sound quality for any genre of content.

 

Samsung developers plan to continue to improve sound quality according to the preferences of individual viewers so that each viewer can enjoy the best TV viewing experience, right for them. Because volume patterns differ for every user, and the viewing environment can change according to the time of day and other factors, the sound will be accordingly adjusted and optimized to provide the most enjoyable experience to each individual viewer.

 

 

Why does 8K AI technology matter?

 

As customer demand for high-definition TVs and content increases, some terrestrial broadcasting stations have committed to working towards UHD delivery, and various IPTV and cable channels have initiated 4K UHD (3,840 × 2,160) services. However, even as the TV industry begins to launch 8K (7,680 × 4,320) TVs, the reality is that 4K content is, as of yet, still not fully utilized in homes.

 

In this context, Samsung has proposed a new direction for TV technology by combining 8K UHD display technology and premium content through AI. Samsung has developed an AI algorithm that automatically enhances picture quality to solve the problem of limited high-quality content. As an example, AI technology plays a key role in the 85-inch 8K QLED TV that Samsung introduced at CES 2018.

 

Samsung will begin the process of applying AI technology to its 8K QLED TVs from the second half of this year, and viewers will soon be able to enjoy UHD quality video that is nearly 8K in resolution and delivers optimized sound for any type of content. A genuine 8K era is now on the horizon, and Samsung will continue to lead the way.

Microsoft and Adaptive Biotechnologies announce partnership using AI to decode immune system; diagnose, treat disease

The human immune system is an astonishing diagnostic system, continuously adapting itself to detect any signal of disease in the body. Essentially, the state of the immune system tells a story about virtually everything affecting a person’s health. It may sound like science fiction, but what if we could “read” this story? Our scientific understanding of human health would be fundamentally advanced. And more importantly, this would provide a foundation for a new generation of precise medical diagnostic and treatment options.

Peter Lee, Corporate Vice President of AI + Research (Photo by Scott Eklund/Red Box Pictures)

Amazingly, this isn’t just science fiction, but can be science fact. And so we’re excited to announce a new partnership with Seattle-based Adaptive Biotechnologies, coupling the latest advances in AI and machine learning with recent breakthroughs in biotechnology to build a practical technology for mapping and decoding the human immune system. Together, we have a goal that is simple to state but also incredibly ambitious: create a universal blood test that reads a person’s immune system to detect a wide variety of diseases including infections, cancers and autoimmune disorders in their earliest stage, when they can be most effectively diagnosed and treated.

We believe deeply in the potential for this partnership with Adaptive and have made a substantial financial investment in the company. We have also begun a major research and development collaboration that involves Adaptive’s scientists working closely with our top researchers to use Adaptive’s innovative sequencing technology and Microsoft’s large-scale machine learning and cloud computing capabilities to make deep reading of the immune system a reality.

Adaptive CEO and co-founder Chad Robins said in a press release today this announcement comes at a time of inflection in healthcare and biotechnology, as we now have the technology to be able to map the immune system. The potential to help clinicians and researchers connect the dots and understand the relationship between disease states could eventually lead to a better understanding of overall human health.

Imagine a world with an “X-ray of the immune system.” This would open new doors to predictive medicine, as a person’s immunological history is believed to shape their response to new pathogens and treatments in ways that are currently impossible to explore. The impact on human health of such a universal blood test that reads a person’s exposure and response to disease would be, in a word, transformational.

Photo of lab worker's gloved hands working with immunosequencing kit

The immune system’s response to the presence of disease is expressed in the genetics of special cells, called T-cells and B-cells, which form the distributed command and control for the adaptive immune system. Each T-cell has a corresponding surface protein called a T-cell receptor (TCR), which has a genetic code that targets a specific signal of disease, or an antigen.

Mapping TCRs to antigens is a massive challenge, requiring very deep AI technology and machine learning capabilities coupled with emerging research and techniques in computational biology applied to genomics and immunosequencing. A challenge of this nature hasn’t been solved before, but with the collective team we’ve formed with Adaptive, we believe we have the experience, technical capability and tenacity to deliver.

The result would provide a true breakthrough – a detailed insight into what the immune system is doing. Put simply, sequencing the immune system can reveal what diseases the body currently is fighting or has ever fought. A blood sample, therefore, contains the key information needed to read what the immune system is currently detecting.

The basis of this approach is to develop a universal T-cell receptor/antigen map – a model of T-cell receptor sequences and the codes of the antigens they have fought. This universal map of the immune system will enable earlier and more accurate diagnosis of disease and eventually lead to a better understanding of overall human health. Microsoft and Adaptive expect this universal map to be the key for the research and development of simple blood-based diagnostics that are broadly accessible to people around the world.
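Purely as an illustration of what mapping sequences to antigens could look like as a machine learning problem, the sketch below featurizes toy TCR sequences with amino-acid 3-mers and fits a simple classifier. The sequences, labels and model choice are all invented and are far simpler than anything the partnership describes.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy, made-up TCR CDR3 sequences with hypothetical antigen labels.
tcr_sequences = ["CASSLGQAYEQYF", "CASSLGGTDTQYF", "CASSPDRGGYEQYF", "CASRPDRNTEAFF"]
antigen_labels = ["virus_A", "virus_A", "virus_B", "virus_B"]

# Represent each sequence by its overlapping 3-mers of amino acids.
featurizer = CountVectorizer(analyzer="char", ngram_range=(3, 3), lowercase=False)
model = make_pipeline(featurizer, LogisticRegression(max_iter=1000))
model.fit(tcr_sequences, antigen_labels)

print(model.predict(["CASSLGQTDTQYF"]))   # predicted antigen class for a new TCR
```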

We’re incredibly excited to collaborate on this project with our partners at Adaptive, who have developed unique immunosequencing capabilities and immune system knowledge, along with very large data sets of TCR sequences. Classifying and mapping this data represents a large-scale machine learning project for which we’ll lean heavily on Microsoft’s cloud computing capabilities and our elite research teams.

We know this partnership and the resulting work represent a big challenge. But we believe in the impact technology can have in healthcare, specifically how AI, the cloud and collaboration with our partners can come together and transform what is possible.

This project is a cornerstone of our Healthcare NExT initiative, with a goal to empower innovators and pair leading capabilities in life and computer sciences to dramatically accelerate the diagnosis and treatment of autoimmune disorders, cancer and infectious disease. At Microsoft, we believe that AI and the cloud have the power to transform healthcare – improving outcomes, providing better access and lowering costs. The Microsoft Healthcare NExT initiative was launched last year to maximize the ability of artificial intelligence and cloud computing to accelerate innovation in the healthcare industry, advance science through technology and turn the lifesaving potential of next discoveries into reality.

We’ll share more details at the upcoming JP Morgan Healthcare Conference in San Francisco, including a fireside chat at 5 p.m. PT on Wednesday, Jan. 10 with Chad Robins and myself called “Decoding the Human Immune System: A Closer Look at a Landmark Partnership.”

The post Microsoft and Adaptive Biotechnologies announce partnership using AI to decode immune system; diagnose, treat disease appeared first on The Official Microsoft Blog.

Earth to exoplanet: Hunting for planets with machine learning

For thousands of years, people have looked up at the stars, recorded observations, and noticed patterns. Some of the first objects early astronomers identified were planets, which the Greeks called “planētai,” or “wanderers,” for their seemingly irregular movement through the night sky. Centuries of study helped people understand that the Earth and other planets in our solar system orbit the sun—a star like many others.

Today, with the help of technologies like telescope optics, space flight, digital cameras, and computers, it’s possible for us to extend our understanding beyond our own sun and detect planets around other stars. Studying these planets—called exoplanets—helps us explore some of our deepest human inquiries about the universe. What else is out there? Are there other planets and solar systems like our own?

Though technology has aided the hunt, finding exoplanets isn’t easy. Compared to their host stars, exoplanets are cold, small and dark—about as tricky to spot as a firefly flying next to a searchlight … from thousands of miles away. But with the help of machine learning, we’ve recently made some progress.

One of the main ways astrophysicists search for exoplanets is by analyzing large amounts of data from NASA’s Kepler mission with both automated software and manual analysis. Kepler observed about 200,000 stars for four years, taking a picture every 30 minutes, creating about 14 billion data points. Those 14 billion data points translate to about 2 quadrillion possible planet orbits! It’s a huge amount of information for even the most powerful computers to analyze, creating a laborious, time-intensive process. To make this process faster and more effective, we turned to machine learning.

The measured brightness of a star decreases ever so slightly when an orbiting planet blocks some of the light. The Kepler space telescope observed the brightness of 200,000 stars for 4 years to hunt for these characteristic signals caused by transiting planets.

Machine learning is a way of teaching computers to recognize patterns, and it’s particularly useful in making sense of large amounts of data. The key idea is to let a computer learn by example instead of programming it with specific rules.

I'm a Google AI researcher with an interest in space, and started this work as a 20 percent project (an opportunity at Google to work on something that interests you for 20 percent of your time). In the process, I reached out to Andrew, an astrophysicist from UT Austin, to collaborate. Together, we took this technique to the skies and taught a machine learning system how to identify planets around faraway stars.

Using a dataset of more than 15,000 labeled Kepler signals, we created a TensorFlow model to distinguish planets from non-planets. To do this, it had to recognize patterns caused by actual planets, versus patterns caused by other objects like starspots and binary stars. When we tested our model on signals it had never seen before, it correctly identified which signals were planets and which signals were not planets 96 percent of the time. So we knew it worked!
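We’re not reproducing the full model here, but as a simplified sketch of the approach, a small convolutional network over a folded light curve can be set up in TensorFlow like this (the input length and layer sizes are illustrative assumptions, not our published architecture):

```python
import tensorflow as tf

LIGHT_CURVE_LEN = 2001  # assumed number of brightness samples per example

# A tiny 1-D convolutional classifier: input is a fixed-length light curve,
# output is the probability that the signal was caused by a transiting planet.
model = tf.keras.Sequential([
    tf.keras.layers.Conv1D(16, 5, activation="relu",
                           input_shape=(LIGHT_CURVE_LEN, 1)),
    tf.keras.layers.MaxPooling1D(5),
    tf.keras.layers.Conv1D(32, 5, activation="relu"),
    tf.keras.layers.GlobalMaxPooling1D(),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),   # planet vs. not-planet
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
# model.fit(train_curves, train_labels, validation_data=(val_curves, val_labels))
```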

Kepler 90i is the eighth planet discovered orbiting the Kepler 90 star, making it the first known 8-planet system outside of our own.

Armed with our working model, we shot for the stars, using it to hunt for new planets in Kepler data. To narrow the search, we looked at the 670 stars that were already known to host two or more exoplanets. In doing so, we discovered two new planets: Kepler 80g and Kepler 90i. Significantly, Kepler 90i is the eighth planet discovered orbiting the Kepler 90 star, making it the first known 8-planet system outside of our own.

NASA_PlanetsPart2_v05_750px.gif
We used 15,000 labeled Kepler signals to train our machine learning model to identify planet signals. We used this model to hunt for new planets in data from 670 stars, and discovered two planets missed in previous searches.

Some fun facts about our newly discovered planet: it’s 30 percent larger than Earth, and its surface temperature is approximately 800°F—not ideal for your next vacation. It also orbits its star every 14 days, meaning you’d have a birthday there just about every two weeks.

sol-&-kepler-2.gif
Kepler 90 is the first known 8-planet system outside of our own. In this system, planets orbit closer to their star, and Kepler 90i orbits once every 14 days. (Note that planet sizes and distances from stars are not to scale.)

The sky is the limit (so to speak) when it comes to the possibilities of this technology. So far, we’ve only used our model to search 670 stars out of 200,000. There may be many exoplanets still unfound in Kepler data, and new ideas and techniques like machine learning will help fuel celestial discoveries for many years to come. To infinity, and beyond!

Opening the Google AI China Center

Since becoming a professor 12 years ago and joining Google a year ago, I’ve had the good fortune to work with many talented Chinese engineers, researchers and technologists. China is home to many of the world's top experts in artificial intelligence (AI) and machine learning. All three winning teams of the ImageNet Challenge in the past three years have been largely composed of Chinese researchers. Chinese authors contributed 43 percent of all content in the top 100 AI journals in 2015—and when the Association for the Advancement of AI discovered that their annual meeting overlapped with Chinese New Year this year, they rescheduled.

I believe AI and its benefits have no borders. Whether a breakthrough occurs in Silicon Valley, Beijing or anywhere else, it has the potential to make life better for everyone in the entire world. As an AI-first company, we see this as an important part of our collective mission. And we want to work with the best AI talent, wherever that talent is, to achieve it.

That’s why I am excited to launch the Google AI China Center, our first such center in Asia, at our Google Developer Days event in Shanghai today. This Center joins other AI research groups we have all over the world, including in New York, Toronto, London and Zurich, all contributing towards the same goal of finding ways to make AI work better for everyone.

Focused on basic AI research, the Center will consist of a team of AI researchers in Beijing, supported by Google China’s strong engineering teams. We’ve already hired some top experts, and will be working to build the team in the months ahead (check our jobs site for open roles!). Along with Dr. Jia Li, Head of Research and Development at Google Cloud AI, I’ll be leading and coordinating the research. Besides publishing its own work, the Google AI China Center will also support the AI research community by funding and sponsoring AI conferences and workshops, and working closely with the vibrant Chinese AI research community.

Humanity is going through a huge transformation thanks to the phenomenal growth of computing and digitization. In just a few years, automatic image classification in photo apps has become a standard feature. And we’re seeing rapid adoption of natural language as an interface with voice assistants like Google Home. At Cloud, we see our enterprise partners using AI to transform their businesses in fascinating ways at an astounding pace. As technology starts to shape human life in more profound ways, we will need to work together to ensure that the AI of tomorrow benefits all of us. 

The Google AI China Center is a small contribution to this goal. We look forward to working with the brightest AI researchers in China to help find solutions to the world’s problems. 

Once again, the science of AI has no borders, and neither do its benefits.

A look at one billion drawings from around the world

Since November 2016, people all around the world have drawn one billion doodles in Quick, Draw!, a web game where a neural network tries to recognize your drawings.


That includes 2.9 million cats, 2.9 million hot dogs, and 2.9 million drawings of snowflakes.

QuickDrawFaces_Blog_Snowflakes.gif

Each drawing is unique. But when you step back and look at one billion of them, the differences fade away. Turns out, one billion drawings can remind us of how similar we are.


Take drawings people made of faces. Some have eyebrows.

QuickDrawFaces_Blog_Eyebrows.gif

Some have ears.

QuickDrawFaces_Blog_Ears.gif

Some have hair.

QuickDrawFaces_Blog_Hair.gif

Some are round.

QuickDrawFaces_Blog_Round.gif

Some are oval.

QuickDrawFaces_Blog_Oval.gif

But if you look at them all together and squint, you notice something interesting: Most people seem to draw faces that are smiling.

KeywordBlog_heroimage.png

These sorts of interesting patterns emerge with lots of drawings. Like how people all over the world have trouble drawing bicycles.

Blog_Bicycle_gridimage.png

With some exceptions from the rare bicycle-drawing experts.

QuickDrawFaces_Blog_GoodBikes.gif

If you overlay these drawings, you’ll also notice some interesting patterns based on geography. Like the directions that chairs might point:

Or the number of scoops you might get on an ice cream cone.

QuickDraw_Overlay_Images_icecream.png

(Source: Kyle McDonald)

And the strategy you might use to draw a star.

QuickDraw_Overlay_Images_stars.png

Still, no matter the drawing method, over the last 12 months, people have drawn more stars in Quick, Draw! than there are actual stars visible to the naked eye in the night sky.


If there’s one thing one billion drawings has taught us, it’s that no matter who we are or where we’re from, we’re united by the fun of making silly drawings of the things around us.


Quick, Draw! began as a simple way to let anyone play with machine learning. But these billions of drawings are also a valuable resource for improving machine learning. Researchers at Google have used them to train models like sketch-rnn, which lets people draw with a neural network. And the data we gathered from the game powers tools like AutoDraw, which pairs machine learning with drawings from talented artists to help everyone create anything visual, fast.


There is so much we have yet to discover. To explore a subset of the billion drawings, visit our open dataset. To learn more about how Quick, Draw! was built, read this post. And to draw your own star (or ice cream cone, or bicycle), play a round of Quick, Draw!

Pivot to the cloud: intelligent features in Google Sheets help businesses uncover insights

When it comes to data in spreadsheets, deciphering meaningful insights can be a challenge whether you’re a spreadsheet guru or data analytics pro. But thanks to advances in the cloud and artificial intelligence, you can instantly uncover insights and empower everyone in your organization—not just those with technical or analytics backgrounds—to make more informed decisions.

We launched "Explore" in Sheets to help you decipher your data easily using the power of machine intelligence, and since then we’ve added even more ways for you to intelligently visualize and share your company data. Today, we’re announcing additional features to Google Sheets to help businesses make better use of their data, from pivot tables and formula suggestions powered by machine intelligence, to even more flexible ways to help you analyze your data.

Easier pivot tables, faster insights

Many teams rely on pivot tables to summarize massive data sets and find useful patterns, but creating them manually can be tricky. Now, if you have data organized in a spreadsheet, Sheets can intelligently suggest a pivot table for you.


In the Explore panel, you can also ask questions of your data using everyday language (via natural language processing) and have the answer returned as a pivot table. For example, type “what is the sum of revenue by salesperson?” or “how much revenue does each product category generate?” and Sheets can help you find the right pivot table analysis.

GIF
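To make the aggregation concrete, here’s the same “sum of revenue by salesperson” computation expressed with pandas; this is purely an illustration of what the suggested pivot table computes, not how Sheets works internally, and the column names are made up:

```python
import pandas as pd

# Toy sales data with assumed column names.
sales = pd.DataFrame({
    "salesperson": ["Ana", "Ben", "Ana", "Ben"],
    "product":     ["A",   "A",   "B",   "B"],
    "revenue":     [120,   80,    200,   150],
})

# The pivot that "what is the sum of revenue by salesperson?" resolves to.
pivot = pd.pivot_table(sales, values="revenue", index="salesperson", aggfunc="sum")
print(pivot)
```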

In addition, if you want to create a pivot table from scratch, Sheets can suggest a number of relevant tables in the pivot table editor to help you summarize your data faster.

Suggested formulas, quicker answers

We often use basic spreadsheet formulas like =SUM or =AVERAGE for data analysis, but it takes time to make sure all inputs are written correctly. Soon, you may notice suggestions pop up when you type “=” in a cell. Using machine intelligence, Sheets provides full formula suggestions to you based on contextual clues from your spreadsheet data. We designed this to help teams save time and get answers more intuitively.

Formula suggestions in Sheets

Even more Sheets features

We’re also adding more features to make Sheets even better for data analysis:

  • Check out a refreshed UI for pivot tables in Sheets, and new, customizable headings for rows and columns.
  • View your data differently with new pivot table features. When you create a pivot table, you can “show values as a % of totals” to see summarized values as a fraction of grand totals. Once you have a table, you can right-click on a cell to “view details” or even combine pivot table groups to aggregate data the way you need it. We’re also adding new format options, like repeated row labels, to give you more fine-tuned control of how to present your summarized data.
  • Create and edit waterfall charts. Waterfall charts are good for visualizing sequential changes in data, like if you want to see the incremental breakdown of last year’s revenue month-by-month. Select Insert > Chart > Chart type picker and then choose “waterfall.”
  • Quickly import or paste fixed-width formatted data files. Sheets will automatically split up the data into columns for you without needing a delimiter, like commas, between data.

These new Sheets features will roll out in the coming weeks—see specific details here. To learn more about how G Suite can help your business uncover valuable insights and speed up efficiencies, visit the G Suite website. Or check out these tips to help you get started with Sheets.

Get ready for AI to help make your business more productive

Editor’s note: Companies are evaluating how to use artificial intelligence to transform how they work. Nicholas McQuire, analyst at CCS Insight, reflects on how businesses are using machine learning and assistive technologies to help employees be more productive. He also provides tangible takeaways on how enterprises can better prepare for the future of work.

Employees are drowning in a sea of data and sprawling digital tools, using an average of 6.1 mobile apps for work purposes today, according to a recent CCS Insight survey of IT decision-makers. Part of the reason we’ve seen a lag in macro productivity since the 2008 financial crisis is that we waste a lot of time doing mundane tasks, like searching for data, booking meetings and learning the ins and outs of complex software.

According to Harvard Business Review, wasted time and inefficient processes—what experts call "organizational drag"—cost the U.S. economy a staggering $3 trillion each year. Employees need more assistive and personalized technology to help them tackle organizational drag and work faster and smarter.

Over the next five years, artificial intelligence (AI) will change the way we work and, in the process, transform businesses.

The arrival of AI in the enterprise is quickening

I witnessed a number of proofs of concept in machine learning in 2017; many speech- and image-based cognitive applications are emerging in specific markets, like fraud detection in finance, low-level contract analysis in the legal sector and personalization in retail. There are also AI applications emerging in corporate functions such as IT support, human resources, sales and customer service.

This shows promise for the technology, particularly in the face of challenges like trust, complexity, security and training required for machine learning systems. But it also suggests that the arrival of AI in enterprises could be moving more quickly than we think.

According to the same study, 58 percent of respondents said they are either using, trialling or researching the technology in their business. Decision-makers also said that on average, 29 percent of their applications will be enhanced with AI within the next two years—a remarkably bullish view.

New opportunities for businesses to evolve productivity

In this context, new AI capabilities pose exciting opportunities to evolve productivity and collaboration.

  • Assistive software: In the past year, assistive, cognitive features have become more prevalent in productivity software, such as search, quicker access to documents, automated email replies and virtual assistants. These solutions help surface contextually relevant information for employees and can automate simple, time-consuming tasks, like scheduling meetings, creating help desk tickets, booking conference rooms or summarizing content. In the future, they might also help firms improve and manage employee engagement, a critical human resources and leadership challenge at the moment.
  • Natural language processing: It won’t be long before we also see the integration of voice or natural language processing in productivity apps. The rise of speech-controlled smart speakers such as Google Home, Amazon Echo or the recently launched Alexa for Business shows that creating and completing documents using speech dictation, or using natural language queries to parse data or control functions in spreadsheets, is no longer in the realm of science fiction.
  • Security: Perhaps one of the biggest uses of AI will be to protect company information. Companies are beginning to use AI to protect against spam, phishing and malware in email, as well as the alarming rise of data breaches across the globe; the use of AI to detect threats and improve incident response will likely rise exponentially. Cloud security vendors with access to higher volumes of signals to train AI models are well placed to help businesses leverage early detection of threats. Perhaps this is why IT professionals listed cybersecurity as the most likely use of AI to be adopted in their organizations.

One thing to note: it’s important that enterprises gradually introduce their employees to machine learning capabilities in productivity apps, so as not to undermine the familiarity of the user experience or put employees off with fears of privacy violations. In this respect, the advent of AI in work activities resembles consumer apps like YouTube, Maps, Spotify or Amazon, where the technology is subtle enough that users may not be aware of its cognitive features. The fact that 54 percent of employees in our survey stated they don’t use AI in their personal life, despite the widespread use of AI in these successful apps, is an important illustration.

How your company can prepare for change

Businesses of all shapes and sizes need to prepare for one of the most important technology shifts of our generation. For those who have yet to get started, here are a few things to consider:

  1. Introduce your employees to AI in collaboration tools early. New, assistive AI features in collaboration software help employees get familiar with the technology and its benefits. Smart email, improved document access and search, chatbots and speech assistants will all be important and accessible technologies that can save employees time, improve workflows and enhance employee experiences.
  2. Take advantage of tools that use AI for data security. Rising data breaches and insider threats, coupled with the growing use of cloud and mobile applications, means the integrity of company data is consistently at risk. Security products that incorporate machine learning-based threat intelligence and anomaly detection should be a key priority.
  3. Don’t neglect change management. New collaboration tools that use AI have a high impact on organizational culture, but not all employees will be immediately supportive of this new way of working. While our surveys reveal employees are generally positive on AI, there is still much fear and confusion surrounding AI as a source of job displacement. Be mindful of the impact of change management, specifically the importance of good communication, training and, above all, employee engagement throughout the process.

AI will no doubt face some challenges over the next few years as it enters the workplace, but sentiment is changing away from doom-and-gloom scenarios towards understanding how the technology can be used more effectively to assist humans and enable smarter work. 

It will be fascinating to see how businesses and technology markets transform as AI matures in the coming years.

An AI Resident at work: Suhani Vora and her work on genomics

Suhani Vora is a bioengineer, aspiring (and self-taught) machine learning expert, SNES Super Mario World ninja, and Google AI Resident. This means that she’s part of a 12-month research training program designed to jumpstart a career in machine learning. Residents, who are paired with Google AI mentors to work on research projects according to their interests, apply machine learning to their own areas of expertise—which range from computer science to epidemiology.

I caught up with Suhani to hear more about her work as an AI Resident, her typical day, and how AI can help transform the field of genomics.

Phing: How did you get into machine learning research?

Suhani: During graduate school, I worked on engineering CRISPR/Cas9 systems, which enable a wide range of research on genomes. And though I was working with the most efficient tools available for genome editing, I knew we could make progress even faster.

One important factor was our limited ability to predict what novel biological designs would work. Each design cycle, we were only using very small amounts of previously collected data and relied on individual interpretation of that data to make design decisions in the lab.

Because we weren’t incorporating more powerful computational methods to make use of big data in the design process, our ability to make progress quickly suffered. Knowing that machine learning methods would greatly accelerate the speed of scientific discovery, I decided to work on finding ways to apply machine learning to my own field of genetic engineering.

I reached out to researchers in the field, asking how best to get started. A Googler I knew suggested I take the machine learning course by Andrew Ng on Coursera (could not recommend it more highly), so I did that. I’ve never had more fun learning! I had also started auditing an ML course at MIT, and reading papers on deep learning applications to problems in genomics. Ultimately, I took the plunge and ended up joining the Residency program after finishing grad school.

Tell us about your role at Google, and what you’re working on right now.

I’m a cross-disciplinary deep learning researcher—I research, code, and experiment with deep learning models to explore their applicability to problems in genomics.

In the same way that we use machine learning models to predict which objects are present in an image (think: searching for your dogs in Google Photos), I research ways we can build neural networks to automatically predict the properties of a DNA sequence. This has all kinds of applications, like predicting whether a DNA mutation will cause cancer or is benign.
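To give a flavor of what that looks like in code (this is just a simplified sketch, not an actual production model), a DNA sequence can be one-hot encoded over the four bases and passed to a small convolutional network in TensorFlow:

```python
import numpy as np
import tensorflow as tf

BASES = "ACGT"

def one_hot(seq: str) -> np.ndarray:
    """Encode a DNA string as a (length, 4) one-hot array over A, C, G, T."""
    arr = np.zeros((len(seq), 4), dtype=np.float32)
    for i, base in enumerate(seq):
        arr[i, BASES.index(base)] = 1.0
    return arr

# A tiny 1-D convolutional classifier over variable-length sequences,
# e.g. scoring whether a mutation-bearing sequence looks pathogenic or benign.
model = tf.keras.Sequential([
    tf.keras.layers.Conv1D(32, 12, activation="relu", input_shape=(None, 4)),
    tf.keras.layers.GlobalMaxPooling1D(),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
```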

What’s a typical day like for you?

On any given day, I’m writing code to process new genomics data, or creating a neural network in TensorFlow to model the data. Right now, a lot of my time is spent troubleshooting such models.

I also spend time chatting with fellow Residents, or a member of the TensorFlow team, to get their expertise on the experiments or code I’m writing. This could include a meeting with my two mentors, Mark DePristo and Quoc Le, top researchers in the field of machine learning who regularly provide invaluable guidance for developing the neural network models I’m interested in.

What do you like most about the AI Residency program? About working at Google?

I like the freedom to pursue topics of our interest, combined with the strong support network we have to get things done. Google is a really positive work environment, and I feel set up to succeed. In a different environment I wouldn’t have the chance to work with a world-class researcher in computational genomics like Mark, AND Quoc, one of the world’s leading machine learning researchers, at the same time and place. It’s pretty mind-blowing.

What kind of background do you need to work in machine learning?

We have such a wide array of backgrounds among our AI Residents! The only real common thread I see is a very strong desire to work on machine learning, or to apply machine learning to a particular problem of choice. I think having a strong background in linear algebra, statistics, computer science, and perhaps modeling makes things easier—but these skills are also now accessible to almost anyone with an interest, through MOOCs!

What kinds of problems do you think that AI can help solve for the world?

Ultimately, it really just depends how creative we are in figuring out what AI can do for us. Current deep learning methods have become state of the art for image recognition tasks, such as automatically detecting pets or scenes in images, and natural language processing, like translating from Chinese to English. I’m excited to see the next wave of applications in areas such as speech recognition, robotic handling, and medicine.

Interested in the AI Residency? Check out submission details and apply for the 2018 program on our Careers site.

Quill.org: better writing with machine learning

Editor’s note: TensorFlow, our open source machine learning library, is just that—open to anyone. Companies, nonprofits, researchers and developers have used TensorFlow in some pretty cool ways, and we’re sharing those stories here on Keyword. Here’s one of them.

Quill.org was founded by a group of educators and technologists to help students become better writers and critical thinkers. Before beginning development, they researched hundreds of studies on writing education and found a common theme—students had a hard time grasping the difference between a run-on sentence and a fragment. So the Quill team developed a tool to help students identify the different parts of a sentence, with a focus on real-time feedback.

Using the Quill tool, students complete a variety of exercises, including joining sentences, writing complex sentences, and explaining their use and understanding of grammar. The tool relies on a huge depository of sentence fragments, which Quill finds, recognizes and compiles using TensorFlow, Google's open source machine learning library. TensorFlow technology is the backbone of the tool and can accurately detect if a student’s answers are correct. After completing the exercises, each student gets a customized explanation of incorrect responses, and the tool learns from each answer to create an individualized testing plan focused on areas of difficulty. Here's an example of how it works:
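As a rough, hypothetical sketch of the kind of model involved (an illustrative TensorFlow text classifier, not Quill’s production system), a sentence-versus-fragment classifier could be set up like this; the vocabulary size and layer choices are assumptions:

```python
import tensorflow as tf

VOCAB_SIZE = 20_000

# Turns raw strings into padded sequences of word ids.
vectorize = tf.keras.layers.TextVectorization(
    max_tokens=VOCAB_SIZE, output_sequence_length=40)
# vectorize.adapt(training_sentences)  # build the vocabulary from real labeled data first

model = tf.keras.Sequential([
    tf.keras.Input(shape=(1,), dtype=tf.string),
    vectorize,
    tf.keras.layers.Embedding(VOCAB_SIZE, 64),
    tf.keras.layers.GlobalAveragePooling1D(),
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # 1 = complete sentence, 0 = fragment
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
```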

More than 200,000 students—62 percent from low-income schools—have used Quill. They’ve collectively answered 20 million exercises, and Quill’s quick, personalized writing instruction has helped them master writing standards across the Common Core curriculum.

Teachers have also benefitted from introducing Quill in their classrooms. Each teacher has access to a customized portal, allowing them to see an individual student’s progress. Plus, by using machine learning, teachers have been spared hundreds of hours of manual grading. Laura, a teacher at Caswell Elementary School in California, said, "Quill has been a wonderful tool for my third graders, many of whom are second language learners. We especially love the immediate feedback provided after each practice; it has definitely made us pay closer attention to detail.”

Quill’s most recent update is a “multiplayer” feature, allowing students to interact with each other in the tool. They can see their peers’ responses, which fosters spirited classroom discussions and collaboration, and helps students learn from each other.

While students aren’t using quills (or even pens!) anymore, strong writing skills are as important as ever. And with the help of machine learning, Quill makes it fun and engaging to develop those skills.

Fighting phishing with smarter protections

Editor’s note: October is Cybersecurity Awareness Month, and we're celebrating with a series of security announcements this week. This is the third post; read the first and second ones.

Online security is top of mind for everyone these days, and we’re more focused than ever on protecting you and your data on Google, in the cloud, on your devices, and across the web.


One of our biggest focuses is phishing, attacks that trick people into revealing personal information like their usernames and passwords. You may remember phishing scams as spammy emails from “princes” asking for money via wire-transfer. But things have changed a lot since then. Today’s attacks are often very targeted—this is called “spear-phishing”—more sophisticated, and may even seem to be from someone you know.


Even for savvy users, today’s phishing attacks can be hard to spot. That’s why we’ve invested in automated security systems that can analyze an internet’s-worth of phishing attacks, detect subtle clues to uncover them, and help us protect our users in Gmail, as well as in other Google products, and across the web.


Our investments have enabled us to significantly decrease the volume of phishing emails that users and customers ever see. With our automated protections, account security (like security keys) and warnings, Gmail is the most secure email service today.


Here is a look at some of the systems that have helped us secure users over time, and enabled us to add brand new protections in the last year.

More data helps protect your data


The best protections against large-scale phishing operations are even larger-scale defenses. Safe Browsing and Gmail spam filters are effective because they have such broad visibility across the web. By automatically scanning billions of emails, webpages, and apps for threats, they enable us to see the clearest, most up-to-date picture of the phishing landscape.


We’ve trained our security systems to block known issues for years. But new, sophisticated phishing emails may come from people’s actual contacts (yes, attackers are able to do this), or include familiar company logos or sign-in pages. Here’s one example:

Screenshot 2017-10-11 at 2.45.09 PM.png

Attacks like this can be really difficult for people to spot. But new insights from our automated defenses have enabled us to immediately detect, thwart and protect Gmail users from subtler threats like these as well.

Smarter protections for Gmail users, and beyond

Since the beginning of the year, we’ve added brand new protections that have reduced the volume of spam in people’s inboxes even further.

  • We now show a warning within Gmail’s Android and iOS apps if a user clicks a link to a phishing site that’s been flagged by Safe Browsing. These supplement the warnings we’ve shown on the web since last year.

safelinks.png

  • We’ve built new systems that detect suspicious email attachments and submit them for further inspection by Safe Browsing. This protects all Gmail users, including G Suite customers, from malware that may be hidden in attachments.
  • We’ve also updated our machine learning models to specifically identify pages that look like common log-in pages and messages that contain spear-phishing signals.

Safe Browsing helps protect more than 3 billion devices from phishing, across Google and beyond. It hunts and flags malicious extensions in the Chrome Web Store, helps block malicious ads, helps power Google Play Protect, and more. And of course, Safe Browsing continues to show millions of red warnings about websites it considers dangerous or insecure in multiple browsers—Chrome, Firefox, Safari—and across many different platforms, including iOS and Android.
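Developers can tap into the same protection through the Safe Browsing APIs. As a rough sketch, checking a single URL against the v4 Lookup API looks roughly like the snippet below (the API key, client name and URL are placeholders, and error handling is omitted):

```python
import requests

API_KEY = "YOUR_API_KEY"  # placeholder
ENDPOINT = f"https://safebrowsing.googleapis.com/v4/threatMatches:find?key={API_KEY}"

payload = {
    "client": {"clientId": "example-client", "clientVersion": "1.0"},
    "threatInfo": {
        "threatTypes": ["SOCIAL_ENGINEERING", "MALWARE"],
        "platformTypes": ["ANY_PLATFORM"],
        "threatEntryTypes": ["URL"],
        "threatEntries": [{"url": "http://example.com/suspicious-page"}],
    },
}

response = requests.post(ENDPOINT, json=payload)
matches = response.json().get("matches", [])
print("Flagged by Safe Browsing" if matches else "No known threat")
```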

pastedImage0 (5).png

Layers of phishing protection


Phishing is a complex problem, and there isn’t a single, silver-bullet solution. That’s why we’ve provided additional protections for users for many years.

pasted image 0 (5).png
  • Since 2012, we’ve warned our users if their accounts are being targeted by government-backed attackers. We send thousands of these warnings each year, and we’ve continued to improve them so they are helpful to people. The warnings look like this.
  • This summer, we began to warn people before they linked their Google account to an unverified third-party app.
  • We first offered two-step verification in 2011, and later strengthened it in 2014 with Security Key, the most secure version of this type of protection. These features add extra protection to your account because attackers need more than just your username and password to sign in.

We’ll never stop working to keep your account secure with industry-leading protections. More are coming soon, so stay tuned.

Pixel Visual Core: image processing and machine learning on Pixel 2

The camera on the new Pixel 2 is packed full of great hardware, software and machine learning (ML), so all you need to do is point and shoot to take amazing photos and videos. One of the technologies that helps you take great photos is HDR+, which makes it possible to get excellent photos of scenes with a large range of brightness levels, from dimly lit landscapes to a very sunny sky.

HDR+ produces beautiful images, and we’ve evolved the algorithm that powers it over the past year to use the Pixel 2’s application processor efficiently, and enable you to take multiple pictures in sequence by intelligently processing HDR+ in the background. In parallel, we’ve also been working on creating hardware capabilities that enable significantly greater computing power—beyond existing hardware—to bring HDR+ to third-party photography applications.

To expand the reach of HDR+, handle the most challenging imaging and ML applications, and deliver lower-latency and even more power-efficient HDR+ processing, we’ve created Pixel Visual Core.

Pixel Visual Core is Google’s first custom-designed co-processor for consumer products. It’s built into every Pixel 2, and in the coming months, we’ll turn it on through a software update to enable more applications to use Pixel 2’s camera for taking HDR+ quality pictures.

pixel visual core
Magnified image of Pixel Visual Core

Let's delve into the details for you technical folks out there: The centerpiece of Pixel Visual Core is the Google-designed Image Processing Unit (IPU)—a fully programmable, domain-specific processor designed from scratch to deliver maximum performance at low power. With eight Google-designed custom cores, each with 512 arithmetic logic units (ALUs), the IPU delivers raw performance of more than 3 trillion operations per second on a mobile power budget. Using Pixel Visual Core, HDR+ can run 5x faster and at less than one-tenth the energy of running on the application processor (AP).

A key ingredient to the IPU’s efficiency is the tight coupling of hardware and software—our software controls many more details of the hardware than in a typical processor. Handing more control to the software makes the hardware simpler and more efficient, but it also makes the IPU challenging to program using traditional programming languages. To avoid this, the IPU leverages domain-specific languages that ease the burden on both developers and the compiler: Halide for image processing and TensorFlow for machine learning. A custom Google-made compiler optimizes the code for the underlying hardware.
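As a back-of-envelope check of that throughput figure, assuming one operation per ALU per clock cycle and a clock of roughly 730 MHz (the post doesn’t state a clock speed, so this is an assumption):

```python
# Rough arithmetic behind the "more than 3 trillion operations per second" claim.
cores = 8
alus_per_core = 512
assumed_clock_hz = 730e6            # assumption, not an official figure

ops_per_second = cores * alus_per_core * assumed_clock_hz
print(f"{ops_per_second:.2e}")      # ≈ 3.0e12, i.e. about 3 trillion ops per second
```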


In the coming weeks, we’ll enable Pixel Visual Core as a developer option in the developer preview of Android Oreo 8.1 (MR1). Later, we’ll enable it for all third-party apps using the Android Camera API, giving them access to the Pixel 2’s HDR+ technology. We can’t wait to see the beautiful HDR+ photography that you already get through your Pixel 2 camera become available in your favorite photography apps.

HDR+ will be the first application to run on Pixel Visual Core. Notably, because Pixel Visual Core is programmable, we’re already preparing the next set of applications. The great thing is that as we port more machine learning and imaging applications to use Pixel Visual Core, Pixel 2 will continuously improve. So keep an eye out!

Three Ways in which Stealthwatch Helps You Get More from Your Network Data

Do you know what the greatest Olympian of all time and Stealthwatch have in common? Both work harder and smarter for unbeatable performance. I recently heard from the one-and-only, Michael Phelps. He said that very early on, he and his coach set very high goals. And he knew that to achieve them, he had to […]

The best hardware, software and AI—together

Today, we introduced our second generation family of consumer hardware products, all made by Google: new Pixel phones, Google Home Mini and Max, an all new Pixelbook, Google Clips hands-free camera, Google Pixel Buds, and an updated Daydream View headset. We see tremendous potential for devices to be helpful, make your life easier, and even get better over time when they’re created at the intersection of hardware, software and advanced artificial intelligence (AI).


Why Google?

These days many devices—especially smartphones—look and act the same. That means in order to create a meaningful experience for users, we need a different approach. A year ago, Sundar outlined his vision of how AI would change how people would use computers. And in fact, AI is already transforming what Google’s products can do in the real world. For example, swipe typing has been around for a while, but AI lets people use Gboard to swipe-type in two languages at once. Google Maps uses AI to figure out what the parking is like at your destination and suggest alternative spots before you’ve even put your foot on the gas. But, for this wave of computing to reach new breakthroughs, we have to build software and hardware that can bring more of the potential of AI into reality—which is what we’ve set out to do with this year’s new family of products.

Hardware, built from the inside out

We’ve designed and built our latest hardware products around a few core tenets. First and foremost, we want them to be radically helpful. They’re fast, they’re there when you need them, and they’re simple to use. Second, everything is designed for you, so that the technology doesn’t get in the way and instead blends into your lifestyle. Lastly, by creating hardware with AI at the core, our products can improve over time. They’re constantly getting better and faster through automatic software updates. And they’re designed to learn from you, so you’ll notice features—like the Google Assistant—get smarter and more assistive the more you interact with them.


You’ll see this reflected in our 2017 lineup of new Made by Google products:

  • The Pixel 2 has the best camera of any smartphone, again, along with a gorgeous display and augmented reality capabilities. Pixel owners get unlimited storage for their photos and videos, and an exclusive preview of Google Lens, which uses AI to give you helpful information about the things around you.
  • Google Home Mini brings the Assistant to more places throughout your home, with a beautiful design that fits anywhere. And Max is our biggest and best-sounding Google Home device, powered by the Assistant. And with AI-based Smart Sound, Max has the ability to adapt your audio experience to you—your environment, context, and preferences.
  • With Pixelbook, we’ve reimagined the laptop as a high-performance Chromebook, with a versatile form factor that works the way you do. It’s the first laptop with the Assistant built in, and the Pixelbook Pen makes the whole experience even smarter.
  • Our new Pixel Buds combine Google smarts and the best digital sound. You’ll get elegant touch controls that put the Assistant just a tap away, and they’ll even help you communicate in a different language.
  • The updated Daydream View is the best mobile virtual reality (VR) headset on the market, and the simplest, most comfortable VR experience.
  • Google Clips is a totally new way to capture genuine, spontaneous moments—all powered by machine learning and AI. This tiny camera seamlessly sends clips to your phone, and even edits and curates them for you.

Assistant, everywhere

Across all these devices, you can interact with the Google Assistant any way you want—talk to it with your Google Home or your Pixel Buds, squeeze your Pixel 2, or use your Pixelbook’s Assistant key or circle things on your screen with the Pixelbook Pen. Wherever you are, and on any device with the Assistant, you can connect to the information you need and get help with the tasks to get you through your day. No other assistive technology comes close, and it continues to get better every day.

New hardware products

Google’s hardware business is just getting started, and we’re committed to building and investing for the long run. We couldn’t be more excited to introduce you to our second-generation family of products that truly brings together the best of Google software and thoughtfully designed hardware with cutting-edge AI. We hope you enjoy using them as much as we do.

Best commute ever? Ride along with Google execs Diane Greene and Fei-Fei Li

Editor’s Note: The Grace Hopper Celebration of Women in Computing is coming up, and Diane Greene and Dr. Fei-Fei Li—two of our senior leaders—are getting ready. Sometimes Diane and Fei-Fei commute to the office together, and this time we happened to be along to capture the ride. Diane took over the music for the commute, and with Aretha Franklin’s “Respect” in the background, she and Fei-Fei chatted about the conference, their careers in tech, motherhood, and amplifying female voices everywhere. Hop in the backseat for Diane and Fei-Fei’s ride to work.

(A quick note for the riders: This conversation has been edited for brevity, and so you don’t have to read Diane and Fei-Fei talking about U-turns.)

fei-fei and diane.gif

Fei-Fei: Are you getting excited for Grace Hopper?

Diane: I’m super excited for the conference. We’re bringing together technical women to surface a lot of things that haven’t been talked about as openly in the past.

Fei-Fei: You’ve had a long career in tech. What makes this point in time different from the early days when you entered this field?

Diane: I got a degree in engineering in 1976 (ed note: Fei-Fei jumped in to remind Diane that this was the year she was born!). Computers were so exciting, and I learned to program. When I went to grad school to study computer science in 1985, there was actually a fair number of women at UC Berkeley. I’d say we had at least 30 percent women, which is way better than today.

It was a new, undefined field. And whenever there’s a new industry or technology, it’s wide open for everyone because nothing’s been established. Tech was that way, so it was quite natural for women to work in artificial intelligence and theory, and even in systems, networking, and hardware architecture. I came from mechanical engineering and the oil industry where I was the only woman. Tech was full of women then, but now women make up less than 15 percent of the people in tech.

Fei-Fei: So do you think it’s too late?

Diane: I don’t think it’s too late. Girls in grade school and high school are coding. And certainly in colleges the focus on engineering is really strong, and the numbers are growing again.

Fei-Fei: You’re giving a talk at Grace Hopper—how will you talk to them about what distinguishes your career?

Diane: It’s wonderful that we’re both giving talks! Growing up, I loved building things so it was natural for me to go into engineering. I want to encourage other women to start with what you’re interested in and what makes you excited. If you love building things, focus on that, and the career success will come. I’ve been so unbelievably lucky in my career, but it’s a proof point that you can end up having quite a good career while doing what you’re interested in.

I want to encourage other women to start with what you’re interested in and what makes you excited. If you love building things, focus on that, and the career success will come. Diane Greene

Fei-Fei: And you are a mother of two grown, beautiful children. How did you prioritize them while balancing career?

Diane: When I was at VMware, I had the “go home for dinner” rule. When we founded the company, I was pregnant and none of the other founders had kids. But we were able to build a culture around families—every time someone had a kid we gave them a VMware diaper bag. Whenever my kids were having a school play or parent-teacher conference, I would make a big show of leaving in the middle of the day so everyone would know they could do that too. And at Google, I encourage both men and women on my team to find that balance.

Fei-Fei: It’s so important for your message to get across because young women today are thinking about their goals and what they want to build for the world, but also for themselves and their families. And there are so many women and people of color doing great work, how do we lift up their work? How do we get their voices heard? This is something I think about all the time, the voice of women and underrepresented communities in AI.

Diane: This is about educating people—not just women—to surface the accomplishments of everybody and make sure there’s no unconscious bias going on. I think Grace Hopper is a phenomenal tool for this, and there are things that I incorporate into my work day to prevent that unconscious bias: pausing to make sure the right people were included in a meeting, and that no one has been overlooked. And encouraging everyone in that meeting to participate so that all voices are heard.

Fei-Fei: Grace Hopper could be a great platform to share best practices for how to address these issues.

...young women today are thinking about their goals and what they want to build for the world, but also for themselves and their families. Dr. Fei-Fei Li

Diane: Every company is struggling to address diversity and there’s a school of thought that says having three or more people from one minority group makes all the difference in the world—I see it on boards. Whenever we have three or more women, the whole dynamic changes. Do you see that in your research group at all?

Fei-Fei: Yes, for a long time I was the only woman faculty member in the Stanford AI lab, but now it has attracted a lot of women who do very well because there’s a community. And that’s wonderful for me, and for the group.

Now back to you … you’ve had such a successful career, and I think a lot of women would love to know what keeps you going every day.

Diane: When you wake up in the morning, be excited about what’s ahead for the day. And if you’re not excited, ask yourself if it’s time for a change. Right now the Cloud is at the center of massive change in our world, and I’m lucky to have a front row seat to how it’s happening and what’s possible with it. We’re creating the next generation of technologies that are going to help people do things that we didn’t even know were possible, particularly in the AI/ML area. It’s exciting to be in the middle of the transformation of our world and the fast pace at which it’s happening.

Fei-Fei: Coming to Google Cloud, the most rewarding part is seeing how this is helping people go through that transformation and making a difference. And it’s at such a scale that it’s unthinkable on almost any other platform.

Diane: Cloud is making it easier for companies to work together and for people to work across boundaries together, and I love that. I’ve always found when you can collaborate across more boundaries you can get a lot more done.

To hear more from Fei-Fei and Diane, tune into Grace Hopper’s live stream on October 4. 

Access information quicker, do better work with Google Cloud Search

We all get sidetracked at work. We intend to be as efficient as possible, but inevitably, the “busyness” of business gets in the way through back-to-back meetings, unfinished docs or managing a rowdy inbox. To be more efficient, you need quick access to your information like relevant docs, important tasks and context for your meetings.

Sadly, according to a report by McKinsey, workers spend up to 20 percent of their time—an entire day each week—searching for and consolidating information across a number of tools. We made Google Cloud Search available to Enterprise and Business edition customers earlier this year so that teams can access important information quicker. Here are a few ways that Cloud Search can help you get the information you need to accomplish more throughout your day.

1. Search more intuitively, access information quicker

If you search for a doc, you’re probably not going to remember its exact name or where you saved it in Drive. Instead, you might remember who sent the doc to you or a specific piece of information it contains, like a statistic.

A few weeks ago, we launched a new, more intuitive way to search in Cloud Search using natural language processing (NLP) technology. Type questions in Cloud Search using everyday language, like “Documents shared with me by John?,” “What’s my agenda next Tuesday?,” or “What docs need my attention?” and it will track down useful information for you.
NLP GIF

2. Prioritize your to-dos, use spare time more wisely

With so much work to do, deciding what to focus on and what to leave for later isn’t always simple. A study by McKinsey reports that only nine percent of executives surveyed feel “very satisfied” with the way they allocate their time. We think technology, like Cloud Search, should help you with more than just finding what you’re looking for—it should help you stay focused on what’s important.

Imagine if your next meeting gets cancelled and you suddenly have an extra half hour to accomplish tasks. You can open the Cloud Search app to help you focus on what’s important. Powered by machine intelligence, Cloud Search proactively surfaces information that it believes is relevant to you and organizes it into simple cards that appear in the app throughout your workday. For example, it suggests documents or tasks based on which documents need your attention or upcoming meetings you have in Google Calendar.

3. Prepare for meetings, get more out of them

Employees spend a lot of time in meetings. According to a study in the UK by the Centre for Economics and Business, office workers spend an average of four hours per week in meetings. It’s even become normal to join meetings unprepared, and the same group surveyed felt that nearly half (47 percent) of the time spent in meetings is unproductive.

Thankfully, Cloud Search can help. It uses machine intelligence to organize and present information to set you up for success in a meeting. In addition to surfacing relevant docs, Cloud Search also surfaces information about meeting attendees from your corporate directory, and even includes links to relevant conversations from Gmail.

Start by going into Cloud Search to see info related to your next meeting. If you’re interested in looking at another meeting later in the day, just click on “Today’s meetings” and it will show you your agenda for the day. Next, select an event in your agenda (sourced from your Calendar) and Cloud Search will recommend information that’s relevant to that meeting.

GIF 2

Take back your time and focus on what’s important—open the Cloud Search app and get started today, or ask your IT administrator to enable it in your domain. You can also learn more about how Cloud Search can help your teams here.

Now anyone can explore machine learning, no coding required

From helping you find your favorite dog photos, to helping farmers in Japan sort cucumbers, machine learning is changing the way people use code to solve problems. But how does machine learning actually work? We wanted to make it easier for people who are curious about this technology to learn more about it. So we created Teachable Machine, a simple experiment that lets you teach a machine using your camera—live in the browser, no coding required.

Teachable Machine is built with a new library called deeplearn.js, which makes it easier for any web developer to get into machine learning. It trains a neural net right in your browser—locally on your device—without sending any images to a server. We’ve also open sourced the code to help inspire others to make new experiments.

Check it out at g.co/teachablemachine.

A GIPHY engineering intern goes the GIF-stance with Google Cloud Vision

Editor’s Note: Today, we’re GIFted with the presence of a guest author. Bethany Davis, current University of Pennsylvania student and former software engineering summer intern at GIPHY, shares the details of her summer project, which was powered by Google Cloud Vision. This is a condensed and modified version of a post published on the GIPHY Engineering blog.

When my friend was starting her first full-time job, I wanted to GIF her a pep talk before her first day. I had the perfect movie reference in mind: Becca from “Bridesmaids” saying, “You are more beautiful than Cinderella! You smell like pine needles and have a face like sunshine!”

I searched GIPHY for “you are more beautiful than Cinderella” to no avail, then searched for “bridesmaids” and scrolled through several dozen results before giving up.

GiphySearch_2.png
Searching for Bridesmaids or the direct quote did not yield any useful results

It was easy to search for GIFs with popular tags, but because no one had tagged this GIF with the full line from the movie, I couldn’t find it. Yet I knew this GIF was out there. I wished there was a way to find the exact GIF that was pulled from the line in a movie, scene from a TV show or lyric from a song. Luckily, I was about to start my internship at GIPHY and I had the opportunity to tackle the problem head on—by using optical character recognition (OCR) and Google Cloud Vision to help you (and me) find the perfect GIF.

GIF me the tools and I’ll finish the job

When I started my internship, GIPHY engineers had already generated metadata about our collection of GIFs using Google Cloud Vision, an image recognition tool that is powered by machine learning. Specifically, Cloud Vision had performed optical character recognition (OCR) on our entire GIF library to detect text or captions within the image. The OCR results we got back from Google Cloud Vision were so good that my team was ready to incorporate the data directly into our search engine. I was tasked with parsing the data and indexing each GIF, then updating our search query to leverage the new, bolstered metadata.
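For context, a single OCR call with the google-cloud-vision Python client looks roughly like the snippet below. This is a generic illustration of the API rather than GIPHY’s pipeline, and the frame path is a placeholder:

```python
from google.cloud import vision

client = vision.ImageAnnotatorClient()

with open("frame_0.png", "rb") as f:            # one extracted GIF frame (placeholder path)
    image = vision.Image(content=f.read())

response = client.text_detection(image=image)   # run OCR on the frame
if response.text_annotations:
    # The first annotation is the full block of text detected in the frame.
    print(response.text_annotations[0].description)
```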

Using Luigi, I wrote a batch job that processed the JSON data generated from Google Cloud Vision. Then I used AWS Simple Queue Service to coordinate the data transfer from Google Cloud Vision to documents in our search index. GIPHY search is built on top of Elasticsearch, which stores GIF documents, and the search query returns results based on the data in our Elasticsearch index. Bringing all these components together looks something like this:

GiphySearch_Workflow.png
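As a minimal, hypothetical sketch of the batch-and-queue step in that diagram, a Luigi task could read OCR results (one JSON object per GIF) and publish one update message per GIF to SQS. The file format, field names and queue URL below are assumptions, not GIPHY’s actual schema:

```python
import json

import boto3
import luigi


class QueueOcrUpdates(luigi.Task):
    """Read Cloud Vision OCR output and queue one search-index update per GIF."""

    ocr_path = luigi.Parameter()    # JSON lines of {"gif_id": ..., "ocr_text": ...}
    queue_url = luigi.Parameter()   # SQS queue consumed by the indexing workers

    def output(self):
        # Marker file so Luigi knows this batch has already been queued.
        return luigi.LocalTarget(str(self.ocr_path) + ".done")

    def run(self):
        sqs = boto3.client("sqs")
        with open(self.ocr_path) as f:
            for line in f:
                doc = json.loads(line)
                sqs.send_message(QueueUrl=self.queue_url, MessageBody=json.dumps(doc))
        with self.output().open("w") as out:
            out.write("ok")


# To run locally (hypothetical paths):
# luigi.build([QueueOcrUpdates(ocr_path="ocr_results.jsonl",
#                              queue_url="https://sqs.us-east-1.amazonaws.com/123/gif-updates")],
#             local_scheduler=True)
```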

One of the biggest challenges in building this update was ensuring that we could process data for millions of GIFs quickly. I had to learn how to optimize the runtime of the code that prepares GIF updates for Elasticsearch. My first iteration took 80+ hours, but eventually I got it to run in just eight.

Once all the data was indexed, the next step was to incorporate the text/caption metadata into our query. I used what’s called a match phrase query, which looks for words in the caption that appear in the same order as the words in the search input—guaranteeing that a substring of my movie quote is intact in the results. I also had to decide how much to weigh the data from Google Cloud Vision relative to other sources of data we have about a GIF (like its tags or the frequency with which users click on it) to determine the most relevant results.
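And as a hedged illustration of that query shape (the index and field names are assumptions), an Elasticsearch request that phrase-matches the OCR caption and boosts it relative to tags might look like this:

```python
from elasticsearch import Elasticsearch

es = Elasticsearch()
query = "you are more beautiful than cinderella"

body = {
    "query": {
        "bool": {
            "should": [
                # Require the caption words in order, and weight caption matches higher.
                {"match_phrase": {"ocr_text": {"query": query, "boost": 2.0}}},
                {"match": {"tags": query}},
            ]
        }
    }
}

results = es.search(index="gifs", body=body)
for hit in results["hits"]["hits"]:
    print(hit["_id"], hit["_score"])
```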

It was time to see how the change would affect results. Using an internal GIPHY tool called Search UX, I searched for “where are the turtles,” a quote from “The Office.” The difference between the old query and the new one was dramatic:

GiphySearch_3.png

I also used a tool that examines the change on a larger scale by running the old and new queries against a random set of search terms—useful for ensuring that the change won’t disrupt popular searches like “cat” or “happy birthday,” which already deliver high-quality results.

See the GIFference

After our internal tools indicated a positive change, I launched the updated query as an A/B experiment. The results looked promising, with an overall increase in click-through rate of 0.5 percent. But my change affects a very specific type of search, especially longer phrases, and the impact of the change is even more noticeable for queries in this category. For example, click-through rate when searching for the phrase “never give up never surrender” (from “Galaxy Quest”) increased 32 percent, and click-through rate for the phrase “gotta be quicker than that” increased 31 percent. In addition to quotes from movies and TV shows, we saw improvements for general phrases like “everything will be ok” and “there you go.” The final click-through rate for these queries is almost 100 percent!

The ultimate test was my own, though. I revisited my search query from the beginning of the summer:

GiphySearch_4.png

Success! The search results are much improved. Now, the next time you use GIPHY to search for a specific scene or a direct quote, the results will show you exactly what you were looking for.

To learn more about the technical details behind my project, see the GIPHY Engineering blog.

Gamescom, Hot Chips, Hackathon and inclusivity – Weekend Reading: Aug. 25 edition

This week was awash in news out of gamescom 2017, a European trade fair for digital gaming culture that took place in Cologne, Germany. If your head is spinning from all the announcements, take a deep breath and devote the next three minutes to watching this video, with everything you need to know out of Xbox, including pre-orders of the Xbox One X and the limited-edition Project Scorpio, the Xbox One S Minecraft limited-edition bundle, and news on games such as PlayerUnknown’s “Battlegrounds,” “State of Decay 2,” “Forza Motorsport 7,” “ReCore Definitive Edition,” “Assassin’s Creed Origins,” “Middle-earth: Shadow of War,” Xbox Backward Compatibility and more.

A picture of three different types of Xbox devices and controllers.

Gamescom wasn’t the only big event this week. At the Hot Chips 2017 conference, a cross-Microsoft team unveiled a new deep-learning acceleration platform, codenamed Project Brainwave, that was designed for real-time artificial intelligence. That means the system processes requests as fast as it receives them, with ultra-low latency, according to Doug Burger, a distinguished engineer at Microsoft. “Project Brainwave achieves a major leap forward in both performance and flexibility for cloud-based serving of deep learning models,” Burger wrote.

Speaking of artificial intelligence, it dominated this year’s Microsoft Hackathon last month, with a winning project that’s too hot to talk about. The Hackathon started in 2014 as an experiment to engage employees, and it’s now the world’s biggest private hackathon with more than 18,000 participants who showcase bold new ideas that influence company products and sometimes lead to entirely new services. This year’s winning team put together a project that’s “a compelling and practical use of artificial intelligence that we think our customers will love,” said Jeff Ramos, who leads the Microsoft Garage, the team that runs the Hackathon. “It’s so compelling that we’ve decided to be discreet in the amount of details we want to share.” Taken as a whole, this year’s group of hackers showed how quickly AI is becoming the fabric of how a new generation of technology services is delivered, with projects submitted for everything from self-driving wheelchairs to the prediction of traffic signal times.

A crowd of Hackathon 2017 participants cheer and wave at the camera outside on a sunny day.

It was also a big week for inclusivity at Microsoft. In conjunction with the US Business Leadership Network conference, which focuses on inclusive hiring, Microsoft employees shared personal stories showing the benefits of having a diverse population. Software engineer Swetha Machanavajhala, who was born with profound hearing loss, shared how she uses data and machine learning to help find ways to enable people who are hearing impaired to better understand and react to the world around them. Amos Miller, who is blind and understands first-hand how important technology is for people with disabilities, said working for Microsoft’s Artificial Intelligence and Research team is like “working in a toy store.” Jessica Rafuse, a program manager with Microsoft Accessibility and an employment attorney by trade, shared her experience as a champion for the inclusive hiring program.

And Beth Anne Katz, a program manager, explained how her long-hidden struggle with mental illness and depression reached a welcome turning point with the assistance of a “most incredible boss” and the Microsoft CARES employee assistance program.

Three Polaroid snapshots of Beth Anne Katz, hung on a red string with clothespins.

Discounted apps will help assuage the back-to-school blues, with savings of as much as 50 percent on heavy hitters such as Movie Edit Pro Windows Store Edition, Stagelight and Complete Anatomy. And don’t forget about next month’s STEM Saturdays workshops, where Microsoft partners with Mattel Hot Wheels® Speedometry™ for free, drop-in sessions hosted by Microsoft Stores that offer a chance to engage in science, technology and math projects. Participants will use sensors to monitor cars as they race down the iconic Hot Wheels orange track, measuring speed and force of impact during collisions.

A group of children watch a demonstration of the Hot Wheels orange track game.

This week on our Facebook, Twitter, Instagram and LinkedIn pages, we featured the four all-women teams who participated in the #MakeWhatsNext Patent Program. From devices that convert text to braille in real time to a VR game that can prevent bullying, these extraordinary women are changing the world with their groundbreaking inventions and the support of the Patent Program.

Two women sit together, looking at a laptop screen.

Whew, summer may be winding down, but there’s still a lot going on. Enjoy this last weekend of August, and we hope to see you back here next week!

Posted by Susanna Ray

Microsoft News Center Staff

The post Gamescom, Hot Chips, Hackathon and inclusivity – Weekend Reading: Aug. 25 edition appeared first on The Official Microsoft Blog.
