The human immune system is an astonishing diagnostic system, continuously adapting itself to detect any signal of disease in the body. Essentially, the state of the immune system tells a story about virtually everything affecting a person’s health. It may sound like science fiction, but what if we could “read” this story? Our scientific understanding…
For thousands of years, people have looked up at the stars, recorded observations, and noticed patterns. Some of the first objects early astronomers identified were planets, which the Greeks called “planētai,” or “wanderers,” for their seemingly irregular movement through the night sky. Centuries of study helped people understand that the Earth and other planets in our solar system orbit the sun—a star like many others.
Today, with the help of technologies like telescope optics, space flight, digital cameras, and computers, it’s possible for us to extend our understanding beyond our own sun and detect planets around other stars. Studying these planets—called exoplanets—helps us explore some of our deepest human inquiries about the universe. What else is out there? Are there other planets and solar systems like our own?
Though technology has aided the hunt, finding exoplanets isn’t easy. Compared to their host stars, exoplanets are cold, small and dark—about as tricky to spot as a firefly flying next to a searchlight … from thousands of miles away. But with the help of machine learning, we’ve recently made some progress.
One of the main ways astrophysicists search for exoplanets is by analyzing large amounts of data from NASA’s Kepler mission with both automated software and manual analysis. Kepler observed about 200,000 stars for four years, taking a picture every 30 minutes, creating about 14 billion data points. Those 14 billion data points translate to about 2 quadrillion possible planet orbits! It’s a huge amount of information for even the most powerful computers to analyze, creating a laborious, time-intensive process. To make this process faster and more effective, we turned to machine learning.
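The 14 billion figure follows from quick arithmetic on the numbers above. A back-of-the-envelope sketch (assuming 365-day years):

```python
# Back-of-the-envelope check of Kepler's data volume, using the
# figures from the text: 200,000 stars, a picture every 30 minutes,
# for four years.
stars = 200_000
pictures_per_day = 24 * 2          # one every 30 minutes
days = 4 * 365
data_points = stars * pictures_per_day * days
print(data_points)                 # roughly 14 billion brightness measurements
```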
Machine learning is a way of teaching computers to recognize patterns, and it’s particularly useful in making sense of large amounts of data. The key idea is to let a computer learn by example instead of programming it with specific rules.
I’m a Google AI researcher with an interest in space, and started this work as a 20 percent project (an opportunity at Google to work on something that interests you for 20 percent of your time). In the process, I reached out to Andrew, an astrophysicist from UT Austin, to collaborate. Together, we took this technique to the skies and taught a machine learning system how to identify planets around faraway stars.
Using a dataset of more than 15,000 labeled Kepler signals, we created a TensorFlow model to distinguish planets from non-planets. To do this, it had to recognize patterns caused by actual planets, versus patterns caused by other objects like starspots and binary stars. When we tested our model on signals it had never seen before, it correctly identified which signals were planets and which signals were not planets 96 percent of the time. So we knew it worked!
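As a toy illustration of the learn-by-example idea (this is not the actual TensorFlow model, and the two features and their distributions below are invented for illustration), here is a classifier that learns to separate "planet" from "non-planet" signals purely from labeled examples:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for labeled Kepler signals: in this toy setup,
# "planets" (label 1) have deeper, more periodic dips than "non-planets"
# (label 0). Both features are invented for illustration.
n = 400
depth = np.concatenate([rng.normal(0.8, 0.1, n), rng.normal(0.2, 0.1, n)])
period = np.concatenate([rng.normal(0.7, 0.1, n), rng.normal(0.3, 0.1, n)])
X = np.column_stack([depth, period])
y = np.concatenate([np.ones(n), np.zeros(n)])

# Logistic regression trained by gradient descent: the model "learns by
# example" from labeled signals instead of hand-coded rules.
w, b = np.zeros(2), 0.0
for _ in range(2000):
    p = 1 / (1 + np.exp(-(X @ w + b)))   # predicted probability of "planet"
    grad = p - y
    w -= 0.1 * X.T @ grad / len(y)
    b -= 0.1 * grad.mean()

pred = (1 / (1 + np.exp(-(X @ w + b)))) > 0.5
accuracy = (pred == y).mean()
```

The real model was a neural network with far more capacity, but the principle is the same: feed it labeled signals, and let it find the distinguishing patterns itself.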
Armed with our working model, we shot for the stars, using it to hunt for new planets in Kepler data. To narrow the search, we looked at the 670 stars that were already known to host two or more exoplanets. In doing so, we discovered two new planets: Kepler 80g and Kepler 90i. Significantly, Kepler 90i is the eighth planet discovered orbiting the Kepler 90 star, making it the first known 8-planet system outside of our own.
Some fun facts about our newly discovered planet: it’s 30 percent larger than Earth and has a surface temperature of approximately 800°F—not ideal for your next vacation. It also orbits its star every 14 days, meaning you’d have a birthday there just about every two weeks.
The sky is the limit (so to speak) when it comes to the possibilities of this technology. So far, we’ve only used our model to search 670 stars out of 200,000. There may be many exoplanets still unfound in Kepler data, and new ideas and techniques like machine learning will help fuel celestial discoveries for many years to come. To infinity, and beyond!
Since becoming a professor 12 years ago and joining Google a year ago, I’ve had the good fortune to work with many talented Chinese engineers, researchers and technologists. China is home to many of the world’s top experts in artificial intelligence (AI) and machine learning. All three winning teams of the ImageNet Challenge in the past three years have been largely composed of Chinese researchers. Chinese authors contributed 43 percent of all content in the top 100 AI journals in 2015—and when the Association for the Advancement of AI discovered that their annual meeting overlapped with Chinese New Year this year, they rescheduled.
I believe AI and its benefits have no borders. Whether a breakthrough occurs in Silicon Valley, Beijing or anywhere else, it has the potential to make life better for the entire world. As an AI first company, this is an important part of our collective mission. And we want to work with the best AI talent, wherever that talent is, to achieve it.
That’s why I am excited to launch the Google AI China Center, our first such center in Asia, at our Google Developer Days event in Shanghai today. This Center joins other AI research groups we have all over the world, including in New York, Toronto, London and Zurich, all contributing towards the same goal of finding ways to make AI work better for everyone.
Focused on basic AI research, the Center will consist of a team of AI researchers in Beijing, supported by Google China’s strong engineering teams. We’ve already hired some top experts, and will be working to build the team in the months ahead (check our jobs site for open roles!). Along with Dr. Jia Li, Head of Research and Development at Google Cloud AI, I’ll be leading and coordinating the research. Besides publishing its own work, the Google AI China Center will also support the AI research community by funding and sponsoring AI conferences and workshops, and working closely with the vibrant Chinese AI research community.
Humanity is going through a huge transformation thanks to the phenomenal growth of computing and digitization. In just a few years, automatic image classification in photo apps has become a standard feature. And we’re seeing rapid adoption of natural language as an interface with voice assistants like Google Home. At Cloud, we see our enterprise partners using AI to transform their businesses in fascinating ways at an astounding pace. As technology starts to shape human life in more profound ways, we will need to work together to ensure that the AI of tomorrow benefits all of us.
The Google AI China Center is a small contribution to this goal. We look forward to working with the brightest AI researchers in China to help find solutions to the world’s problems.
Once again, the science of AI has no borders, and neither do its benefits.
Since November 2016, people all around the world have drawn one billion doodles in Quick, Draw!, a web game where a neural network tries to recognize your drawings.
That includes 2.9 million cats, 2.9 million hot dogs, and 2.9 million drawings of snowflakes.
Each drawing is unique. But when you step back and look at one billion of them, the differences fade away. Turns out, one billion drawings can remind us of how similar we are.
Take drawings people made of faces. Some have eyebrows.
Some have ears.
Some have hair.
Some are round.
Some are oval.
But if you look at them all together and squint, you notice something interesting: Most people seem to draw faces that are smiling.
These sorts of interesting patterns emerge with lots of drawings. Like how people all over the world have trouble drawing bicycles.
With some exceptions from the rare bicycle-drawing experts.
If you overlay these drawings, you’ll also notice some interesting patterns based on geography. Like the directions that chairs might point:
Or the number of scoops you might get on an ice cream cone.
And the strategy you might use to draw a star.
Still, no matter the drawing method, over the last 12 months, people have drawn more stars in Quick, Draw! than there are actual stars visible to the naked eye in the night sky.
If there’s one thing one billion drawings have taught us, it’s that no matter who we are or where we’re from, we’re united by the fun of making silly drawings of the things around us.
Quick, Draw! began as a simple way to let anyone play with machine learning. But these billions of drawings are also a valuable resource for improving machine learning. Researchers at Google have used them to train models like sketch-rnn, which lets people draw with a neural network. And the data we gathered from the game powers tools like AutoDraw, which pairs machine learning with drawings from talented artists to help everyone create anything visual, fast.
There is so much we have yet to discover. To explore a subset of the billion drawings, visit our open dataset. To learn more about how Quick, Draw! was built, read this post. And to draw your own star (or ice cream cone, or bicycle), play a round of Quick, Draw!
When it comes to data in spreadsheets, deciphering meaningful insights can be a challenge whether you’re a spreadsheet guru or data analytics pro. But thanks to advances in the cloud and artificial intelligence, you can instantly uncover insights and empower everyone in your organization—not just those with technical or analytics backgrounds—to make more informed decisions.
We launched “Explore” in Sheets to help you decipher your data easily using the power of machine intelligence, and since then we’ve added even more ways for you to intelligently visualize and share your company data. Today, we’re announcing additional features to Google Sheets to help businesses make better use of their data, from pivot tables and formula suggestions powered by machine intelligence, to even more flexible ways to help you analyze your data.
Easier pivot tables, faster insights
Many teams rely on pivot tables to summarize massive data sets and find useful patterns, but creating them manually can be tricky. Now, if you have data organized in a spreadsheet, Sheets can intelligently suggest a pivot table for you.
In the Explore panel, you can also ask questions of your data using everyday language (via natural language processing) and have the answer returned as a pivot table. For example, type “what is the sum of revenue by salesperson?” or “how much revenue does each product category generate?” and Sheets can help you find the right pivot table analysis.
In addition, if you want to create a pivot table from scratch, Sheets can suggest a number of relevant tables in the pivot table editor to help you summarize your data faster.
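In spirit, these natural-language questions map onto ordinary pivot-table aggregations. A minimal sketch in Python with pandas (the column names and figures below are invented for illustration, not real Sheets data):

```python
import pandas as pd

# Hypothetical sales data; column names and values are made up.
df = pd.DataFrame({
    "salesperson": ["Ana", "Ana", "Ben", "Ben", "Ben"],
    "category":    ["Toys", "Games", "Toys", "Toys", "Games"],
    "revenue":     [120, 80, 200, 50, 90],
})

# "What is the sum of revenue by salesperson?" as a pivot table.
by_person = df.pivot_table(index="salesperson", values="revenue", aggfunc="sum")

# "How much revenue does each product category generate?"
by_category = df.pivot_table(index="category", values="revenue", aggfunc="sum")
```

The natural-language layer in Sheets essentially translates the question into this kind of grouping and aggregation for you.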
Suggested formulas, quicker answers
We often use basic spreadsheet formulas like =SUM or =AVERAGE for data analysis, but it takes time to make sure all inputs are written correctly. Soon, you may notice suggestions pop up when you type “=” in a cell. Using machine intelligence, Sheets provides full formula suggestions to you based on contextual clues from your spreadsheet data. We designed this to help teams save time and get answers more intuitively.
Even more Sheets features
We’re also adding more features to make Sheets even better for data analysis:
- Check out a refreshed UI for pivot tables in Sheets, and new, customizable headings for rows and columns.
- View your data differently with new pivot table features. When you create a pivot table, you can “show values as a % of totals” to see summarized values as a fraction of grand totals. Once you have a table, you can right-click on a cell to “view details” or even combine pivot table groups to aggregate data the way you need it. We’re also adding new format options, like repeated row labels, to give you more fine-tuned control of how to present your summarized data.
- Create and edit waterfall charts. Waterfall charts are good for visualizing sequential changes in data, like if you want to see the incremental breakdown of last year’s revenue month-by-month. Select Insert > Chart > Chart type picker and then choose “waterfall.”
- Quickly import or paste fixed-width formatted data files. Sheets will automatically split up the data into columns for you without needing a delimiter, like commas, between data.
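The fixed-width import works much like inferring column boundaries from aligned whitespace, which pandas also does with `read_fwf`. A small sketch with made-up data:

```python
from io import StringIO

import pandas as pd

# Fixed-width data with no delimiter between columns (made-up sample).
raw = (
    "name      region  units\n"
    "Alice     West       12\n"
    "Bob       East        7\n"
)

# read_fwf infers the column boundaries from the aligned whitespace,
# splitting the data into columns without needing commas or tabs.
df = pd.read_fwf(StringIO(raw))
```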
These new Sheets features will roll out in the coming weeks—see specific details here. To learn more about how G Suite can help your business uncover valuable insights and speed up efficiencies, visit the G Suite website. Or check out these tips to help you get started with Sheets.
Editor’s note: Companies are evaluating how to use artificial intelligence to transform how they work. Nicholas McQuire, analyst at CCS Insight, reflects on how businesses are using machine learning and assistive technologies to help employees be more productive. He also provides tangible takeaways on how enterprises can better prepare for the future of work.
Employees are drowning in a sea of data and sprawling digital tools, using an average of 6.1 mobile apps for work purposes today, according to a recent CCS Insight survey of IT decision-makers. Part of the reason we’ve seen a lag in macro productivity since the 2008 financial crisis is that we waste a lot of time doing mundane tasks, like searching for data, booking meetings and learning the ins and outs of complex software.
According to Harvard Business Review, wasted time and inefficient processes—what experts call “organizational drag”—cost the U.S. economy a staggering $3 trillion each year. Employees need more assistive and personalized technology to help them tackle organizational drag and work faster and smarter.
Over the next five years, artificial intelligence (AI) will change the way we work and, in the process, transform businesses.
The arrival of AI in the enterprise is quickening
I witnessed a number of proofs of concept in machine learning in 2017; many speech- and image-based cognitive applications are emerging in specific markets, like fraud detection in finance, low-level contract analysis in the legal sector and personalization in retail. There are also AI applications emerging in corporate functions such as IT support, human resources, sales and customer service.
This shows promise for the technology, particularly in the face of challenges like trust, complexity, security and training required for machine learning systems. But it also suggests that the arrival of AI in enterprises could be moving more quickly than we think.
According to the same study, 58 percent of respondents said they are either using, trialling or researching the technology in their business. Decision-makers also said that on average, 29 percent of their applications will be enhanced with AI within the next two years—a remarkably bullish view.
New opportunities for businesses to evolve productivity
In this context, new AI capabilities pose exciting opportunities to evolve productivity and collaboration.
- Assistive software: In the past year, assistive, cognitive features have become more prevalent in productivity software, such as search, quicker access to documents, automated email replies and virtual assistants. These solutions help surface contextually relevant information for employees and can automate simple, time-consuming tasks, like scheduling meetings, creating help desk tickets, booking conference rooms or summarizing content. In the future, they might also help firms improve and manage employee engagement, a critical human resources and leadership challenge at the moment.
- Natural language processing: It won’t be long before we also see the integration of voice or natural language processing in productivity apps. The rise of speech-controlled smart speakers such as Google Home, Amazon Echo or the recently-launched Alexa for Business shows that creating and completing documents using speech dictation, or using natural language queries to parse data or control functions in spreadsheets, is no longer in the realm of science fiction.
- Security: Perhaps one of the biggest uses of AI will be to protect company information. Companies are beginning to use AI to protect against spam, phishing and malware in email, as well as the alarming rise of data breaches across the globe; the use of AI to detect threats and improve incident response will likely rise exponentially. Cloud security vendors with access to higher volumes of signals to train AI models are well placed to help businesses leverage early detection of threats. Perhaps this is why IT professionals listed cybersecurity as the use of AI most likely to be adopted in their organizations.
One thing to note: it’s important that enterprises gradually introduce their employees to machine learning capabilities in productivity apps, so as not to undermine the familiarity of the user experience or turn employees off for fear of privacy violations. In this respect, the advent of AI in work activities resembles consumer apps like YouTube, Maps, Spotify or Amazon, where the technology is subtle enough that users may not be aware of its cognitive features. The fact that 54 percent of employees in our survey stated they don’t use AI in their personal life, despite the widespread use of AI in these successful apps, is a telling illustration.
How your company can prepare for change
Businesses of all shapes and sizes need to prepare for one of the most important technology shifts of our generation. For those who have yet to get started, here are a few things to consider:
- Introduce your employees to AI in collaboration tools early. New, assistive AI features in collaboration software help employees get familiar with the technology and its benefits. Smart email, improved document access and search, chatbots and speech assistants will all be important and accessible technologies that can save employees time, improve workflows and enhance employee experiences.
- Take advantage of tools that use AI for data security. Rising data breaches and insider threats, coupled with the growing use of cloud and mobile applications, mean the integrity of company data is constantly at risk. Security products that incorporate machine learning-based threat intelligence and anomaly detection should be a key priority.
- Don’t neglect change management. New collaboration tools that use AI have a high impact on organizational culture, but not all employees will be immediately supportive of this new way of working. While our surveys reveal employees are generally positive on AI, there is still much fear and confusion surrounding AI as a source of job displacement. Be mindful of the impact of change management, specifically the importance of good communication, training and, above all, employee engagement throughout the process.
AI will no doubt face some challenges over the next few years as it enters the workplace, but sentiment is changing away from doom-and-gloom scenarios towards understanding how the technology can be used more effectively to assist humans and enable smarter work.
It will be fascinating to see how businesses and technology markets transform as AI matures in the coming years.
Suhani Vora is a bioengineer, aspiring (and self-taught) machine learning expert, SNES Super Mario World ninja, and Google AI Resident. This means that she’s part of a 12-month research training program designed to jumpstart a career in machine learning. Residents, who are paired with Google AI mentors to work on research projects according to their interests, apply machine learning to their fields of expertise—from computer science to epidemiology.
I caught up with Suhani to hear more about her work as an AI Resident, her typical day, and how AI can help transform the field of genomics.
Phing: How did you get into machine learning research?
Suhani: During graduate school, I worked on engineering CRISPR/Cas9 systems, which enable a wide range of research on genomes. And though I was working with the most efficient tools available for genome editing, I knew we could make progress even faster.
One important factor was our limited ability to predict what novel biological designs would work. Each design cycle, we were only using very small amounts of previously collected data and relied on individual interpretation of that data to make design decisions in the lab.
Failing to incorporate more powerful computational methods to make use of big data in the design process was hurting our ability to make progress quickly. Knowing that machine learning methods would greatly accelerate the speed of scientific discovery, I decided to work on finding ways to apply machine learning to my own field of genetic engineering.
I reached out to researchers in the field, asking how best to get started. A Googler I knew suggested I take the machine learning course by Andrew Ng on Coursera (could not recommend it more highly), so I did that. I’ve never had more fun learning! I had also started auditing an ML course at MIT, and reading papers on deep learning applications to problems in genomics. Ultimately, I took the plunge and ended up joining the Residency program after finishing grad school.
Tell us about your role at Google, and what you’re working on right now.
I’m a cross-disciplinary deep learning researcher—I research, code, and experiment with deep learning models to explore their applicability to problems in genomics.
In the same way that we use machine learning models to predict which objects are present in an image (think: searching for your dogs in Google Photos), I research ways we can build neural networks to automatically predict the properties of a DNA sequence. This has all kinds of applications, like predicting whether a DNA mutation will cause cancer or is benign.
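One way to make DNA "image-like" for a neural network is to one-hot encode the bases. The sketch below is a toy in NumPy, not the actual research model, and the motif is made up; it shows how a convolution-style filter can scan a sequence for a pattern, the way a learned filter scans an image:

```python
import numpy as np

BASES = "ACGT"

def one_hot(seq):
    """Encode a DNA string as a (length, 4) matrix of 0s and 1s."""
    idx = [BASES.index(b) for b in seq]
    out = np.zeros((len(seq), 4))
    out[np.arange(len(seq)), idx] = 1.0
    return out

x = one_hot("ACGTTACGGA")          # shape (10, 4); sequence is made up

# A 1-D convolution with a hand-set filter acts like a learned "motif
# detector": it fires wherever the (invented) motif "ACG" appears.
motif = one_hot("ACG")             # shape (3, 4)
scores = np.array([(x[i:i + 3] * motif).sum() for i in range(len(x) - 2)])
hits = np.where(scores == 3.0)[0]  # positions where the motif matches exactly
```

In a real model, the filters are not hand-set like this; they are learned from labeled data, and many of them are stacked into deep networks.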
What’s a typical day like for you?
On any given day, I’m writing code to process new genomics data, or creating a neural network in TensorFlow to model the data. Right now, a lot of my time is spent troubleshooting such models.
I also spend time chatting with fellow Residents, or a member of the TensorFlow team, to get their expertise on the experiments or code I’m writing. This could include a meeting with my two mentors, Mark DePristo and Quoc Le, top researchers in the field of machine learning who regularly provide invaluable guidance for developing the neural network models I’m interested in.
What do you like most about the AI Residency program? About working at Google?
I like the freedom to pursue topics of our interest, combined with the strong support network we have to get things done. Google is a really positive work environment, and I feel set up to succeed. In a different environment I wouldn’t have the chance to work with a world-class researcher in computational genomics like Mark, AND Quoc, one of the world’s leading machine learning researchers, at the same time and place. It’s pretty mind-blowing.
What kind of background do you need to work in machine learning?
We have such a wide array of backgrounds among our AI Residents! The only real common thread I see is a very strong desire to work on machine learning, or to apply machine learning to a particular problem of choice. I think having a strong background in linear algebra, statistics, computer science, and perhaps modeling makes things easier—but these skills are also now accessible to almost anyone with an interest, through MOOCs!
What kinds of problems do you think that AI can help solve for the world?
Ultimately, it really just depends how creative we are in figuring out what AI can do for us. Current deep learning methods have become state of the art for image recognition tasks, such as automatically detecting pets or scenes in images, and natural language processing, like translating from Chinese to English. I’m excited to see the next wave of applications in areas such as speech recognition, robotic handling, and medicine.
Interested in the AI Residency? Check out submission details and apply for the 2018 program on our Careers site.
Editor’s note: TensorFlow, our open source machine learning library, is just that—open to anyone. Companies, nonprofits, researchers and developers have used TensorFlow in some pretty cool ways, and we’re sharing those stories here on Keyword. Here’s one of them.
Quill.org was founded by a group of educators and technologists to help students become better writers and critical thinkers. Before beginning development, they researched hundreds of studies on writing education and found a common theme—students had a hard time grasping the difference between a run-on sentence and a fragment. So the Quill team developed a tool to help students identify the different parts of a sentence, with a focus on real-time feedback.
Using the Quill tool, students complete a variety of exercises, including joining sentences, writing complex sentences, and explaining their use and understanding of grammar. The tool relies on a huge repository of sentence fragments, which Quill finds, recognizes and compiles using TensorFlow, Google’s open source machine learning library. TensorFlow is the backbone of the tool and can accurately detect whether a student’s answers are correct. After completing the exercises, each student gets a customized explanation of incorrect responses, and the tool learns from each answer to create an individualized testing plan focused on areas of difficulty. Here’s an example of how it works:
More than 200,000 students—62 percent from low-income schools—have used Quill. They’ve collectively answered 20 million exercises, and Quill’s quick, personalized writing instruction has helped them master writing standards across the Common Core curriculum.
Teachers have also benefitted from introducing Quill in their classrooms. Each teacher has access to a customized portal, allowing them to see an individual student’s progress. Plus, by using machine learning, teachers have been spared hundreds of hours of manual grading. Laura, a teacher at Caswell Elementary School in California said, “Quill has been a wonderful tool for my third graders, many of whom are second language learners. We especially love the immediate feedback provided after each practice; it has definitely made us pay closer attention to detail.”
Quill’s most recent update is a “multiplayer” feature, allowing students to interact with each other in the tool. They can see their peers’ responses, which fosters spirited classroom discussions and collaboration, and helps students learn from each other.
While students aren’t using quills (or even pens!) anymore, strong writing skills are as important as ever. And with the help of machine learning, Quill makes it fun and engaging to develop those skills.
Online security is top of mind for everyone these days, and we’re more focused than ever on protecting you and your data on Google, in the cloud, on your devices, and across the web.
One of our biggest focuses is phishing, attacks that trick people into revealing personal information like their usernames and passwords. You may remember phishing scams as spammy emails from “princes” asking for money via wire-transfer. But things have changed a lot since then. Today’s attacks are often very targeted—this is called “spear-phishing”—more sophisticated, and may even seem to be from someone you know.
Even for savvy users, today’s phishing attacks can be hard to spot. That’s why we’ve invested in automated security systems that can analyze an internet’s worth of phishing attacks, detect subtle clues to uncover them, and help us protect our users in Gmail, as well as in other Google products, and across the web.
Our investments have enabled us to significantly decrease the volume of phishing emails that users and customers ever see. With our automated protections, account security (like security keys) and warnings, Gmail is the most secure email service today.
Here is a look at some of the systems that have helped us secure users over time, and enabled us to add brand new protections in the last year.
More data helps protect your data
The best protections against large-scale phishing operations are even larger-scale defenses. Safe Browsing and Gmail spam filters are effective because they have such broad visibility across the web. By automatically scanning billions of emails, webpages, and apps for threats, they enable us to see the clearest, most up-to-date picture of the phishing landscape.
We’ve trained our security systems to block known issues for years. But, new, sophisticated phishing emails may come from people’s actual contacts (yes, attackers are able to do this), or include familiar company logos or sign-in pages. Here’s one example:
Attacks like this can be really difficult for people to spot. But new insights from our automated defenses have enabled us to immediately detect, thwart and protect Gmail users from subtler threats like these as well.
Smarter protections for Gmail users, and beyond
Since the beginning of the year, we’ve added brand new protections that have reduced the volume of spam in people’s inboxes even further.
- We’ve built new systems that detect suspicious email attachments and submit them for further inspection by Safe Browsing. This protects all Gmail users, including G Suite customers, from malware that may be hidden in attachments.
- We’ve also updated our machine learning models to specifically identify pages that look like common log-in pages and messages that contain spear-phishing signals.
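As a toy illustration of how several weak signals can combine into a single verdict (the signal names, weights, and threshold below are entirely invented and are not Google's actual models), consider:

```python
# Invented phishing signals and weights, for illustration only.
SIGNALS = {
    "lookalike_login_page": 0.6,
    "mismatched_reply_to": 0.3,
    "suspicious_attachment": 0.5,
    "known_contact_sender": -0.2,  # a real contact lowers, but doesn't erase, suspicion
}

def phishing_score(email_flags):
    """Sum the weights of whichever signals fire for this email."""
    return sum(SIGNALS[f] for f in email_flags)

def is_suspicious(email_flags, threshold=0.5):
    """Flag the email when the combined signal score crosses the threshold."""
    return phishing_score(email_flags) >= threshold
```

Real systems learn these weights from billions of examples rather than setting them by hand, which is why spear-phishing from a known contact can still be caught when other signals fire.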
Safe Browsing helps protect more than 3 billion devices from phishing, across Google and beyond. It hunts and flags malicious extensions in the Chrome Web Store, helps block malicious ads, helps power Google Play Protect, and more. And of course, Safe Browsing continues to show millions of red warnings about websites it considers dangerous or insecure in multiple browsers—Chrome, Firefox, Safari—and across many different platforms, including iOS and Android.
Layers of phishing protection
Phishing is a complex problem, and there isn’t a single, silver-bullet solution. That’s why we’ve provided additional protections for users for many years.
- Since 2012, we’ve warned our users if their accounts are being targeted by government-backed attackers. We send thousands of these warnings each year, and we’ve continued to improve them so they are helpful to people. The warnings look like this.
- This summer, we began to warn people before they linked their Google account to an unverified third-party app.
- We first offered two-step verification in 2011, and later strengthened it in 2014 with Security Key, the most secure version of this type of protection. These features add extra protection to your account because attackers need more than just your username and password to sign in.
We’ll never stop working to keep your account secure with industry-leading protections. More are coming soon, so stay tuned.
The camera on the new Pixel 2 is packed full of great hardware, software and machine learning (ML), so all you need to do is point and shoot to take amazing photos and videos. One of the technologies that helps you take great photos is HDR+, which makes it possible to get excellent photos of scenes with a large range of brightness levels, from dimly lit landscapes to a very sunny sky.
HDR+ produces beautiful images, and we’ve evolved the algorithm that powers it over the past year to use the Pixel 2’s application processor efficiently, and enable you to take multiple pictures in sequence by intelligently processing HDR+ in the background. In parallel, we’ve also been working on creating hardware capabilities that enable significantly greater computing power—beyond existing hardware—to bring HDR+ to third-party photography applications.
To expand the reach of HDR+, handle the most challenging imaging and ML applications, and deliver lower-latency and even more power-efficient HDR+ processing, we’ve created Pixel Visual Core.
Pixel Visual Core is Google’s first custom-designed co-processor for consumer products. It’s built into every Pixel 2, and in the coming months, we’ll turn it on through a software update to enable more applications to use Pixel 2’s camera for taking HDR+ quality pictures.
Let’s delve into the details for you technical folks out there: The centerpiece of Pixel Visual Core is the Google-designed Image Processing Unit (IPU)—a fully programmable, domain-specific processor designed from scratch to deliver maximum performance at low power. With eight Google-designed custom cores, each with 512 arithmetic logic units (ALUs), the IPU delivers raw performance of more than 3 trillion operations per second on a mobile power budget. Using Pixel Visual Core, HDR+ can run 5x faster and at less than one-tenth the energy of running on the application processor (AP).

A key ingredient to the IPU’s efficiency is the tight coupling of hardware and software—our software controls many more details of the hardware than in a typical processor. Handing more control to the software makes the hardware simpler and more efficient, but it also makes the IPU challenging to program using traditional programming languages. To avoid this, the IPU leverages domain-specific languages that ease the burden on both developers and the compiler: Halide for image processing and TensorFlow for machine learning. A custom Google-made compiler optimizes the code for the underlying hardware.
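The headline numbers above can be sanity-checked with a little arithmetic. Here is a back-of-the-envelope sketch in Python; the assumption that each ALU retires one operation per cycle is ours, not stated in the post:

```python
# Back-of-the-envelope check of the IPU's stated throughput.
# Assumption (not from the post): each ALU retires one operation per cycle.

CORES = 8
ALUS_PER_CORE = 512
TARGET_OPS_PER_SEC = 3e12  # "more than 3 trillion operations per second"

total_alus = CORES * ALUS_PER_CORE              # 4096 ALUs in total
implied_clock_hz = TARGET_OPS_PER_SEC / total_alus

print(f"Total ALUs: {total_alus}")
print(f"Implied clock: {implied_clock_hz / 1e6:.0f} MHz")  # roughly 732 MHz

# The post also cites a 5x speedup at less than one-tenth the energy
# versus the AP, which implies roughly a 50x gain in performance per watt.
speedup, energy_ratio = 5, 0.1
perf_per_watt_gain = speedup / energy_ratio
print(f"Implied perf/W gain vs. AP: {perf_per_watt_gain:.0f}x")
```

Under that one-op-per-cycle assumption, a clock in the 700-something MHz range is plausible for a mobile co-processor, which is consistent with the "mobile power budget" claim.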
In the coming weeks, we’ll enable Pixel Visual Core as a developer option in the developer preview of Android Oreo 8.1 (MR1). Later, we’ll enable it for all third-party apps using the Android Camera API, giving them access to the Pixel 2’s HDR+ technology. We can’t wait to see the beautiful HDR+ photography that you already get through your Pixel 2 camera become available in your favorite photography apps.
HDR+ will be the first application to run on Pixel Visual Core. Notably, because Pixel Visual Core is programmable, we’re already preparing the next set of applications. The great thing is that as we port more machine learning and imaging applications to use Pixel Visual Core, Pixel 2 will continuously improve. So keep an eye out!
Today, we introduced our second generation family of consumer hardware products, all made by Google: new Pixel phones, Google Home Mini and Max, an all new Pixelbook, Google Clips hands-free camera, Google Pixel Buds, and an updated Daydream View headset. We see tremendous potential for devices to be helpful, make your life easier, and even get better over time when they’re created at the intersection of hardware, software and advanced artificial intelligence (AI).
These days many devices—especially smartphones—look and act the same. That means in order to create a meaningful experience for users, we need a different approach. A year ago, Sundar outlined his vision of how AI would change the way people use computers. And in fact, AI is already transforming what Google’s products can do in the real world. For example, swipe typing has been around for a while, but AI lets people use Gboard to swipe-type in two languages at once. Google Maps uses AI to figure out what the parking is like at your destination and suggest alternative spots before you’ve even put your foot on the gas. But, for this wave of computing to reach new breakthroughs, we have to build software and hardware that can bring more of the potential of AI into reality—which is what we’ve set out to do with this year’s new family of products.
Hardware, built from the inside out
We’ve designed and built our latest hardware products around a few core tenets. First and foremost, we want them to be radically helpful. They’re fast, they’re there when you need them, and they’re simple to use. Second, everything is designed for you, so that the technology doesn’t get in the way and instead blends into your lifestyle. Lastly, by creating hardware with AI at the core, our products can improve over time. They’re constantly getting better and faster through automatic software updates. And they’re designed to learn from you, so you’ll notice features—like the Google Assistant—get smarter and more assistive the more you interact with them.
You’ll see this reflected in our 2017 lineup of new Made by Google products:
- The Pixel 2 has the best camera of any smartphone, again, along with a gorgeous display and augmented reality capabilities. Pixel owners get unlimited storage for their photos and videos, and an exclusive preview of Google Lens, which uses AI to give you helpful information about the things around you.
- Google Home Mini brings the Assistant to more places throughout your home, with a beautiful design that fits anywhere. And Max is our biggest and best-sounding Google Home device, powered by the Assistant. And with AI-based Smart Sound, Max has the ability to adapt your audio experience to you—your environment, context, and preferences.
- With Pixelbook, we’ve reimagined the laptop as a high-performance Chromebook, with a versatile form factor that works the way you do. It’s the first laptop with the Assistant built in, and the Pixelbook Pen makes the whole experience even smarter.
- Our new Pixel Buds combine Google smarts and the best digital sound. You’ll get elegant touch controls that put the Assistant just a tap away, and they’ll even help you communicate in a different language.
- The updated Daydream View is the best mobile virtual reality (VR) headset on the market, and the simplest, most comfortable VR experience.
- Google Clips is a totally new way to capture genuine, spontaneous moments—all powered by machine learning and AI. This tiny camera seamlessly sends clips to your phone, and even edits and curates them for you.
Across all these devices, you can interact with the Google Assistant any way you want—talk to it with your Google Home or your Pixel Buds, squeeze your Pixel 2, use your Pixelbook’s Assistant key, or circle things on your screen with the Pixelbook Pen. Wherever you are, and on any device with the Assistant, you can connect to the information you need and get help with the tasks that get you through your day. No other assistive technology comes close, and it continues to get better every day.
Google’s hardware business is just getting started, and we’re committed to building and investing for the long run. We couldn’t be more excited to introduce you to our second-generation family of products that truly brings together the best of Google software, thoughtfully designed hardware, and cutting-edge AI. We hope you enjoy using them as much as we do.
Editor’s Note: The Grace Hopper Celebration of Women in Computing is coming up, and Diane Greene and Dr. Fei-Fei Li—two of our senior leaders—are getting ready. Sometimes Diane and Fei-Fei commute to the office together, and this time we happened to be along to capture the ride. Diane took over the music for the commute, and with Aretha Franklin’s “Respect” in the background, she and Fei-Fei chatted about the conference, their careers in tech, motherhood, and amplifying female voices everywhere. Hop in the backseat for Diane and Fei-Fei’s ride to work.
(A quick note for the riders: This conversation has been edited for brevity, and so you don’t have to read Diane and Fei-Fei talking about U-turns.)
Fei-Fei: Are you getting excited for Grace Hopper?
Diane: I’m super excited for the conference. We’re bringing together technical women to surface a lot of things that haven’t been talked about as openly in the past.
Fei-Fei: You’ve had a long career in tech. What makes this point in time different from the early days when you entered this field?
Diane: I got a degree in engineering in 1976 (ed note: Fei-Fei jumped in to remind Diane that this was the year she was born!). Computers were so exciting, and I learned to program. When I went to grad school to study computer science in 1985, there was actually a fair number of women at UC Berkeley. I’d say we had at least 30 percent women, which is way better than today.
It was a new, undefined field. And whenever there’s a new industry or technology, it’s wide open for everyone because nothing’s been established. Tech was that way, so it was quite natural for women to work in artificial intelligence and theory, and even in systems, networking, and hardware architecture. I came from mechanical engineering and the oil industry where I was the only woman. Tech was full of women then, but today women make up less than 15 percent of the people in tech.
Fei-Fei: So do you think it’s too late?
Diane: I don’t think it’s too late. Girls in grade school and high school are coding. And certainly in colleges the focus on engineering is really strong, and the numbers are growing again.
Fei-Fei: You’re giving a talk at Grace Hopper—how will you talk to them about what distinguishes your career?
Diane: It’s wonderful that we’re both giving talks! Growing up, I loved building things so it was natural for me to go into engineering. I want to encourage other women to start with what you’re interested in and what makes you excited. If you love building things, focus on that, and the career success will come. I’ve been so unbelievably lucky in my career, but it’s a proof point that you can end up having quite a good career while doing what you’re interested in.
I want to encourage other women to start with what you’re interested in and what makes you excited. If you love building things, focus on that, and the career success will come.
Fei-Fei: And you are a mother of two grown, beautiful children. How did you prioritize them while balancing career?
Diane: When I was at VMware, I had the “go home for dinner” rule. When we founded the company, I was pregnant and none of the other founders had kids. But we were able to build a culture around families—every time someone had a kid we gave them a VMware diaper bag. Whenever my kids were having a school play or parent teacher conference, I would make a big show of leaving in the middle of the day so everyone would know they could do that too. And at Google, I encourage both men and women on my team to find that balance.
Fei-Fei: It’s so important for your message to get across because young women today are thinking about their goals and what they want to build for the world, but also for themselves and their families. And there are so many women and people of color doing great work, how do we lift up their work? How do we get their voices heard? This is something I think about all the time, the voice of women and underrepresented communities in AI.
Diane: This is about educating people—not just women—to surface the accomplishments of everybody and make sure there’s no unconscious bias going on. I think Grace Hopper is a phenomenal tool for this, and there are things that I incorporate into my work day to prevent that unconscious bias: pausing to make sure the right people were included in a meeting, and that no one has been overlooked. And encouraging everyone in that meeting to participate so that all voices are heard.
Fei-Fei: Grace Hopper could be a great platform to share best practices for how to address these issues.
…young women today are thinking about their goals and what they want to build for the world, but also for themselves and their families.
Dr. Fei-Fei Li
Diane: Every company is struggling to address diversity and there’s a school of thought that says having three or more people from one minority group makes all the difference in the world—I see it on boards. Whenever we have three or more women, the whole dynamic changes. Do you see that in your research group at all?
Fei-Fei: Yes, for a long time I was the only woman faculty member in the Stanford AI lab, but now it has attracted a lot of women who do very well because there’s a community. And that’s wonderful for me, and for the group.
Now back to you … you’ve had such a successful career, and I think a lot of women would love to know what keeps you going every day.
Diane: When you wake up in the morning, be excited about what’s ahead for the day. And if you’re not excited, ask yourself if it’s time for a change. Right now the Cloud is at the center of massive change in our world, and I’m lucky to have a front row seat to how it’s happening and what’s possible with it. We’re creating the next generation of technologies that are going to help people do things that we didn’t even know were possible, particularly in the AI/ML area. It’s exciting to be in the middle of the transformation of our world and the fast pace at which it’s happening.
Fei-Fei: Coming to Google Cloud, the most rewarding part is seeing how this is helping people go through that transformation and making a difference. And it’s at such a scale that it’s unthinkable on almost any other platform.
Diane: Cloud is making it easier for companies to work together and for people to work across boundaries together, and I love that. I’ve always found when you can collaborate across more boundaries you can get a lot more done.
To hear more from Fei-Fei and Diane, tune into Grace Hopper’s live stream on October 4.
We all get sidetracked at work. We intend to be as efficient as possible, but inevitably, the “busyness” of business gets in the way through back-to-back meetings, unfinished docs or managing a rowdy inbox. To be more efficient, you need quick access to your information like relevant docs, important tasks and context for your meetings.
Sadly, according to a report by McKinsey, workers spend up to 20 percent of their time—an entire day each week—searching for and consolidating information across a number of tools. We made Google Cloud Search available to Enterprise and Business edition customers earlier this year so that teams can access important information more quickly. Here are a few ways that Cloud Search can help you get the information you need to accomplish more throughout your day.
1. Search more intuitively, access information quicker
If you search for a doc, you’re probably not going to remember its exact name or where you saved it in Drive. Instead, you might remember who sent the doc to you or a specific piece of information it contains, like a statistic.
A few weeks ago, we launched a new, more intuitive way to search in Cloud Search using natural language processing (NLP) technology. Type questions in Cloud Search using everyday language, like “Documents shared with me by John?,” “What’s my agenda next Tuesday?,” or “What docs need my attention?” and it will track down useful information for you.
2. Prioritize your to-dos, use spare time more wisely
With so much work to do, deciding what to focus on and what to leave for later isn’t always simple. A study by McKinsey reports that only nine percent of executives surveyed feel “very satisfied” with the way they allocate their time. We think technology, like Cloud Search, should help you with more than just finding what you’re looking for—it should help you stay focused on what’s important.
Imagine if your next meeting gets cancelled and you suddenly have an extra half hour to accomplish tasks. You can open the Cloud Search app to help you focus on what’s important. Powered by machine intelligence, Cloud Search proactively surfaces information that it believes is relevant to you and organizes it into simple cards that appear in the app throughout your workday. For example, it suggests documents or tasks based on which documents need your attention or upcoming meetings you have in Google Calendar.
3. Prepare for meetings, get more out of them
Employees spend a lot of time in meetings. According to a study in the UK by the Centre for Economics and Business, office workers spend an average of four hours per week in meetings. It’s even common to join meetings unprepared. The same survey found that nearly half of the time spent in meetings (47 percent) is unproductive.
Thankfully, Cloud Search can help. It uses machine intelligence to organize and present information to set you up for success in a meeting. In addition to surfacing relevant docs, Cloud Search also surfaces information about meeting attendees from your corporate directory, and even includes links to relevant conversations from Gmail.
Start by going into Cloud Search to see info related to your next meeting. If you’re interested in looking at another meeting later in the day, just click on “Today’s meetings” and it will show you your agenda for the day. Next, select an event in your agenda (sourced from your Calendar) and Cloud Search will recommend information that’s relevant to that meeting.
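For developers, the same natural-language queries described above can also be issued programmatically through the Cloud Search Query API (`query.search`). The sketch below only builds the request body; the field names follow the public API reference as we understand it and should be treated as assumptions, and actually sending the request would additionally require OAuth credentials:

```python
# Hypothetical sketch: constructing a request body for the Cloud Search
# Query API (REST: POST https://cloudsearch.googleapis.com/v1/query/search).
# Field names ("query", "requestOptions.searchApplicationId") are taken
# from the public API reference; verify them against current documentation.

import json

def build_search_request(question: str, search_app_id: str) -> dict:
    """Build a Query API request body for an everyday-language question."""
    return {
        "query": question,  # e.g. "What's my agenda next Tuesday?"
        "requestOptions": {
            "searchApplicationId": search_app_id,
        },
    }

body = build_search_request(
    "What docs need my attention?",
    "searchapplications/default",
)
print(json.dumps(body, indent=2))
```

Because the natural-language interpretation happens server-side, the client simply passes the question through verbatim; no special query syntax is needed.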