Today, we announced an agreement to acquire GitHub, the world’s leading software development platform. I want to share what this acquisition will mean for our industry and for developers. The era of the intelligent cloud and intelligent edge is upon us. Computing is becoming embedded in the world, with every part of our daily life…
That’s a wrap! After a bustling three days at Google I/O, we have a lot to look back on and a lot to look forward to, from helpful features made possible with AI to updates that help you develop a sense of digital wellbeing. Here are 100 of our Google I/O announcements, in no particular order—because we don’t play favorites. 💯
1. Hey Google, you sound great today! You can now choose from six new voices for your Google Assistant.
2. There will even be some familiar voices later this year, with John Legend lending his melodic tones to the Assistant.
3. The Assistant is becoming more conversational. With AI and WaveNet technology, we can better mimic the subtleties of the human voice—the pitch, pace and, um, the pauses.
4. Continued Conversation lets you have a natural back-and-forth conversation without repeating “Hey Google” for each follow-up request. And the Google Assistant will be able to understand when you’re talking to it versus someone else, and respond accordingly.
5. We’re rolling out Multiple Actions so the Google Assistant can understand more complex queries like: “What’s the weather like in New York and in Austin?”
6. Custom Routines allow you to create your own Routine, and start it with a phrase that feels best for you. For example, you can create a Custom Routine for family dinner, and kick it off by saying “Hey Google, dinner’s ready” and the Assistant can turn on your favorite music, turn off the TV, and broadcast “dinner time!” to everyone in the house.
7. Soon you’ll be able to schedule Routines for a specific day or time using the Assistant app or through the Google Clock app for Android.
8. Families have listened to over 130,000 hours of children’s stories on the Assistant in the last two months alone.
9. Later this year we’ll introduce Pretty Please so the Assistant can understand and encourage polite conversation from your little ones.
10. Smart Display devices will be available this summer, bringing the simplicity of voice and the Google Assistant together with a rich visual experience.
11. We redesigned the Assistant experience on the phone. The Assistant will give you a quick snapshot of your day, with suggestions based on the time of day, location and recent interactions with the Assistant.
12. Bon appetit! A new food pick-up and delivery experience for the Google Assistant app will be available later this year.
13. Keep your eyes on the road—the Assistant is coming to navigation in Google Maps with a low visual profile. You can keep your hands on the wheel while sending text messages, playing music and more.
14. Google Duplex is a new capability we will be testing this summer within the Google Assistant to help you make reservations, schedule appointments, and get holiday hours from businesses. Just provide the date and time, and your Assistant will call the business to coordinate for you.
15. The Google Assistant will be available in 80 countries by the end of the year.
16. We’re also bringing Google Home and Google Home Mini to seven more countries later this year: Spain, Mexico, Korea, the Netherlands, Denmark, Norway and Sweden.
17. Soon you’ll see Smart Compose in Gmail, a new feature powered by AI that helps you save time by cutting back on repetitive writing, while reducing the chance of spelling and grammatical errors in your emails.
18. ML Kit brings the breadth of Google’s machine learning technology to app developers, including on-device APIs for text recognition, face detection, image labeling and more. It’s available in one mobile SDK, accessible through Firebase, and works on both Android and iOS.
19. Our third-generation TPUs (Tensor Processing Units) are liquid-cooled and much more powerful than the previous generation, allowing us to train and run models faster so more products can be enhanced with AI.
20. We published results in a Nature Research journal showing that our AI model can predict medical events, helping doctors spot problems before they happen.
21. AI is making it easier for Waymo’s vehicles to drive in different environments, whether it’s the snowy streets of Michigan, foggy hills of San Francisco or rainy roads of Kirkland. With these improvements, we’re moving closer to our goal of bringing self-driving technology to everyone, everywhere.
22. We unveiled a beta version of Android P, focused on intelligence, simplicity and digital wellbeing.
23. We partnered with DeepMind to build Adaptive Battery, which prioritizes battery power for the apps and services you use most.
24. Adaptive Brightness in Android P learns how you like to set the brightness based on your surroundings, and automatically updates it to conserve energy.
25. App Actions help you get to your next task quickly by predicting what action you’ll take next. So if you connect your headphones to your device, Android will suggest an action to resume your favorite Spotify playlist.
26. Actions will also show up throughout your Android phone in places like the Launcher, Smart Text Selection, the Play Store, the Google Search app and the Assistant.
27. Slices makes your smartphone even smarter by showing parts of apps right when you need them most. Say for example you search for “Lyft” in Google Search on your phone—you can see an interactive Slice that gives you the price and time for a trip to work, and you can quickly order the ride.
28. A new enterprise work profile visually separates your work apps. Tap on the work tab to see work apps all in one place, and turn them off with a simple toggle when you get off work.
29. Less is more! Swipe up on the home button in Android P to see a newly designed Overview, with full-screen previews of recently used apps. Simply tap once to jump back into any app.
30. If you’re constantly switching between apps, we’ve got good news for you. Smart Text Selection (which recognizes the meaning of the text you’re selecting and suggests relevant actions) now works in Overview, making it easier to perform the action you want.
31. Android P also brings a redesigned Quick Settings, a better way to take and edit screenshots (say goodbye to the vulcan grip that was required before), simplified volume controls, an easier way to manage notifications and more.
32. Technology should help you with your life, not distract you from it. Android P comes with digital wellbeing features built into the platform.
33. Dashboard gives you a snapshot of how you’re spending time on your phone. It includes information about how long you’ve spent in apps, how many times you unlocked your phone and how many notifications you’ve received.
34. You can take more control over how you engage with your phone. App Timer lets you set time limits on apps, and when you get close to your time limit, Android will nudge you that it’s time to do something else.
35. Do Not Disturb (DND) mode has more oomph. Not only does it silence phone calls and texts, but it also hides visual disruptions like notifications that pop up on your display.
36. We created a gesture to help you focus on being present: If you turn your phone over on the table, it automatically enters DND.
37. With a new API, you can automatically set your status on messaging apps to “away” when DND is turned on.
38. Fall asleep a little easier with Wind Down. Set a bedtime and your phone will automatically switch to Night Light mode and fade to grayscale to eliminate distractions.
39. Android P is packed with security and privacy improvements: updated security protocols, encrypted backups, protected confirmations and more.
40. Thanks to work on Project Treble, an effort we introduced last year to make OS upgrades easier for partners, Android P Beta is available on partner devices including Sony Xperia XZ2, Xiaomi Mi Mix 2S, Nokia 7 Plus, Oppo R15 Pro, Vivo X21, OnePlus 6, and Essential PH‑1, in addition to Pixel and Pixel 2.
41. Say hello to the JBL LINK BAR. We worked with Harman to launch this hybrid device that delivers a full Google Assistant speaker and Android TV experience.
42. We released a limited edition Android TV dongle device, the ADT-2, for developers to create more with Android TV.
43. Android Auto is now working with more than 50 OEMs to support more than 400 cars and aftermarket stereos.
44. Volvo’s next-gen infotainment system powered by Android will integrate with Google apps, including Maps, Assistant and Play Store.
45. Watch out! You can get more done from your watch with new features from the Google Assistant on Wear OS by Google.
46. Smart suggestions from the Google Assistant on Wear OS by Google let you continue conversations directly from your watch. Choose from contextually relevant follow-up questions or responses.
47. Now you can choose to hear answers from your watch speaker or Bluetooth headphones. Just ask Google Assistant on your watch “tell me about my day.”
48. Actions will be available on all Wear OS by Google watches, so you can use your voice to do tasks like preheat your LG oven while you’re unloading your groceries or ask Bay Trains when the next train is leaving. And we’re working with developers and partners to add more Actions and functionalities.
49. We’ve mapped more than 21 million miles across 220 countries, put hundreds of millions of businesses on the map, and provided access to more than 1 billion people around the world.
50. Google Maps is becoming more assistive and personal. A redesigned Explore tab features everything you need to know about dining, events and activity options in whatever area you’re interested in.
51. Top lists give you information from local experts, Google’s algorithms and trusted publishers so you can see everything that’s new and interesting—like the most essential brunches or cheap eats nearby.
52. New features help you easily make plans as a group. You can create a shortlist of places within the app and share it with friends across any platform, so you can quickly vote and decide on a place to go.
53. Your “match” helps you see the world through your lens, suggesting how likely you are to enjoy a food or drink spot based on your preferences.
54. Updated walking directions help you get oriented on your walking journey more quickly and navigate the world on foot with more confidence. So when you emerge out of a subway or reach a crossing with more than four streets, you’ll know which way to go.
55. Suggested actions, powered by machine learning, will start to show up on your photos right as you view them—giving you the option to brighten, share, rotate or archive a picture. Another action on the horizon is the ability to quickly export photos of documents into PDFs.
56. New color pop creations leave the subject of your photo in color while setting the background to black and white.
57. We’re also working on the ability for you to change black-and-white photos into color in just a tap.
58. We announced the Google Photos partner program, giving developers the tools to build smarter, faster and more helpful photo and video experiences in their products, so you can interact with your photos across more apps and devices.
59. The updated Google News uses a new set of AI techniques to find quality reporting and diverse information from around the web, in real time, and organize it into storylines so you can make sense of what’s happening from the world stage to your own backyard.
60. The “For You” tab makes it easy to keep up to date on what you care about, starting with a “Daily Briefing” of five stories that Google has organized for you—a mix of the most important headlines, local news and the latest on your interests.
61. With Full Coverage, you can dive deep into a story with one click. This section is not personalized—everyone will see the same content, including related articles, opinion and analysis pieces, videos, timelines and the ability to see what the impact or reaction has been in real time.
62. The separate Headlines section, also unpersonalized, lets you stay fully informed across a broad spectrum of news, like world news, business, science, sports, entertainment and more.
63. Subscribing to your favorite publishers right in the Google News app is super simple using Subscribe with Google—no forms, new passwords or credit cards—and you can access your subscriptions anywhere you’re logged in across Google and the web.
64. Updates to Google Lens help you get answers about the world around you. With smart text, you can copy and paste text from the real world—like recipes or business cards—to your phone.
65. With style match, if an outfit or a home decor item catches your eye, you can open Lens and not only get info on that specific item (like reviews), but also see similar items.
66. Lens now uses real-time identification so you’ll be able to browse the world around you just by pointing your camera. It’s able to give you information quickly and anchor it to the things you see.
67. Use Lens directly in the camera app on supported devices from the following OEMs: LGE, Motorola, Xiaomi, Sony Mobile, HMD/Nokia, Transsion, TCL, OnePlus, BQ, Asus—and of course the Google Pixel.
68. Lens is coming to more languages, including French, Italian, German, Spanish and Portuguese.
69. Tour Creator lets anyone with a story to tell, like teachers or students, easily make a VR tour using imagery from Google Street View or their own 360 photos.
70. With Sceneform, Java developers can now build immersive, 3D apps without having to learn complicated APIs. They can use it to build AR apps from scratch as well as add AR features to existing ones.
71. We’ve rolled out ARCore’s Cloud Anchor API across Android and iOS to help developers build more collaborative and immersive augmented reality apps. Cloud Anchors makes it possible to create collaborative AR experiences, like redecorating your home, playing games and painting a community mural—all together with your friends.
72. ARCore now features Vertical Plane Detection which means you can place AR objects on more surfaces, like textured walls. Now you can do things like view artwork above your mantlepiece before buying it.
73. Thanks to a capability called Augmented Images, you’ll be able to bring images to life just by pointing your phone at them—this works on QR codes, AR markers and static image targets (like maps, products in a store, logos, photos or movie posters).
74. We launched updates to the YouTube mobile app that will help everyone develop their own sense of digital wellbeing. The Take a Break reminder lets you set a reminder to (you guessed it!) take a break while watching videos after a specified amount of time.
75. You can schedule specific times each day to silence notification sounds and vibrations that are sent to your phone from the YouTube app.
76. You can also opt in to a scheduled notification digest that combines all of the daily push notifications from the YouTube app into a single, combined notification.
77. Soon you’ll have access to a time watched profile to give you a better understanding of the time you spend on YouTube.
78. Lookout, a new Android app, gives people who are blind or visually impaired auditory cues as they encounter objects, text and people around them.
79. We’re introducing the ability to type in Morse code in Gboard beta for Android. We partnered with developer Tania Finlayson, an expert in Morse code assistive technology, to build this feature.
80. After launching in beta at Game Developers Conference, Google Play Instant is now open to all game developers.
81. Updated Google Play Console features help you improve your app’s performance and grow your business. These include improvements to the dashboard statistics, Android vitals, pre-launch report, acquisition report and subscriptions dashboard.
82. Android Jetpack is a new set of components, tools and architectural guidance that makes it quicker and easier for developers to build great Android apps.
83. Android KTX, launching as part of Android Jetpack, optimizes the Kotlin developer experience.
84. Android App Bundle, a new format for publishing Android apps, helps developers deliver great experiences in smaller app sizes and optimize apps for the wide variety of Android devices and form factors available.
85. The latest canary release of Android Studio 3.2 focuses on supporting the Android P Developer Preview, Android App Bundle and Android Jetpack, plus more features to help you develop quickly and easily.
86. We added Dynamic Delivery so your users download only the code and resources they need to run your app, reducing download times and saving space on their devices.
87. With Android Things 1.0, developers can build and ship commercial IoT products using the Android Things platform.
88. The latest improvements to Performance Monitoring on Firebase help you easily monitor app performance issues and identify the parts of your app that stutter or freeze.
89. In the coming months, we’re expanding Firebase Test Lab to include iOS to help get your app into a high-quality state—across both Android and iOS—before you even release it.
90. We shipped Flutter Beta 3, the latest version of our mobile app SDK for creating high-quality, native user experiences on iOS and Android.
91. We launched an early preview of the Android extension libraries (AndroidX) which represents a new era for the Support Library.
92. You can now run Linux apps on your Chromebooks (starting with a preview on the Google Pixelbook), so you can use your favorite tools and familiar commands with the speed, simplicity and security of Chrome OS.
93. Material Theming, part of the latest update to Material Design, lets developers systematically express a unique style across their product more consistently, so they don’t have to choose between building beautiful and building fast. We also redesigned Material.io.
94. We introduced three Material tools to streamline workflow and address common pain points across design and development: Material Theme Editor, a control panel that lets you apply global style changes to components across your design; Gallery, a platform for sharing, reviewing and commenting on design iterations; and Material Icons in five different themes.
95. With open-source Material Components, you can customize key aspects of an app’s design, including color, shape, and type themes.
96. We’ll launch a beta that allows developers to display relevant content from their apps—such as a product catalog for a shopping app—within ads, giving users more helpful information before they download an app.
97. We started early testing to make Google Play Instant compatible with AdWords, so game developers can use Universal App campaigns to reach potential users and let them try out games directly from ads.
98. Developers using ads to grow their user bases will soon have a more complete picture with view through conversion (VTC) reporting, providing more insight into ad impressions and conversions.
99. With rewarded reporting to AdMob, developers can understand and fine-tune the performance of their rewarded ads–ads that let users opt in to view ads in exchange for in-app incentives or digital goods, such as an extra life in a game or 15 minutes of ad-free music streaming.
100. Developers who sell ad placements in their app can now more easily report data back to advertisers with the integration of IAB Tech Lab’s Open Measurement SDK.
Today, we’re kicking off our annual I/O developer conference, which brings together more than 7,000 developers for a three-day event. I/O gives us a great chance to share some of Google’s latest innovations and show how they’re helping us solve problems for our users. We’re at an important inflection point in computing, and it’s exciting to be driving technology forward. It’s clear that technology can be a positive force and improve the quality of life for billions of people around the world. But it’s equally clear that we can’t just be wide-eyed about what we create. There are very real and important questions being raised about the impact of technology and the role it will play in our lives. We know the path ahead needs to be navigated carefully and deliberately—and we feel a deep sense of responsibility to get this right. It’s in that spirit that we’re approaching our core mission.
The need for useful and accessible information is as urgent today as it was when Google was founded nearly two decades ago. What’s changed is our ability to organize information and solve complex, real-world problems thanks to advances in AI.
Pushing the boundaries of AI to solve real-world problems
There’s a huge opportunity for AI to transform many fields. Already we’re seeing some encouraging applications in healthcare. Two years ago, Google developed a neural net that could detect signs of diabetic retinopathy using medical images of the eye. This year, the AI team showed our deep learning model could use those same images to predict a patient’s risk of a heart attack or stroke with a surprisingly high degree of accuracy. We published a paper on this research in February and look forward to working closely with the medical community to understand its potential. We’ve also found that our AI models are able to predict medical events, such as hospital readmissions and length of stays, by analyzing the pieces of information embedded in de-identified health records. These are powerful tools in a doctor’s hands and could have a profound impact on health outcomes for patients. We’re going to be publishing a paper on this research today and are working with hospitals and medical institutions to see how to use these insights in practice.
Another area where AI can solve important problems is accessibility. Take the example of captions. When you turn on the TV it’s not uncommon to see people talking over one another. This makes a conversation hard to follow, especially if you’re hearing-impaired. But using audio and visual cues together, our researchers were able to isolate voices and caption each speaker separately. We call this technology Looking to Listen and are excited about its potential to improve captions for everyone.
Saving time across Gmail, Photos, and the Google Assistant
AI is working hard across Google products to save you time. One of the best examples of this is the new Smart Compose feature in Gmail. By understanding the context of an email, we can suggest phrases to help you write quickly and efficiently. In Photos, we make it easy to share a photo instantly via smart, inline suggestions. We’re also rolling out new features that let you quickly brighten a photo, give it a color pop, or even colorize old black and white pictures.
One of the biggest time-savers of all is the Google Assistant, which we announced two years ago at I/O. Today we shared our plans to make the Google Assistant more visual, more naturally conversational, and more helpful.
Thanks to our progress in language understanding, you’ll soon be able to have a natural back-and-forth conversation with the Google Assistant without repeating “Hey Google” for each follow-up request. We’re also adding half a dozen new voices to personalize your Google Assistant, plus one very recognizable one—John Legend (!). So, next time you ask Google to tell you the forecast or play “All of Me,” don’t be surprised if John Legend himself is around to help.
We’re also making the Assistant more visually assistive with new experiences for Smart Displays and phones. On mobile, we’ll give you a quick snapshot of your day with suggestions based on location, time of day, and recent interactions. And we’re bringing the Google Assistant to navigation in Google Maps, so you can get information while keeping your hands on the wheel and your eyes on the road.
Someday soon, your Google Assistant might be able to help with tasks that still require a phone call, like booking a haircut or verifying a store’s holiday hours. We call this new technology Google Duplex. It’s still early, and we need to get the experience right, but done correctly we believe this will save time for people and generate value for small businesses.
Understanding the world so we can help you navigate yours
AI’s progress in understanding the physical world has dramatically improved Google Maps and created new applications like Google Lens. Maps can now tell you if the business you’re looking for is open, how busy it is, and whether parking is easy to find before you arrive. Lens lets you just point your camera and get answers about everything from that building in front of you … to the concert poster you passed … to that lamp you liked in the store window.
Bringing you the top news from top sources
We know people turn to Google to provide dependable, high-quality information, especially in breaking news situations—and this is another area where AI can make a big difference. Using the latest technology, we set out to create a product that surfaces the news you care about from trusted sources while still giving you a full range of perspectives on events. Today, we’re launching the new Google News. It uses artificial intelligence to bring forward the best of human intelligence—great reporting done by journalists around the globe—and will help you stay on top of what’s important to you.
Helping you focus on what matters
Advances in computing are helping us solve complex problems and deliver valuable time back to our users—which has been a big goal of ours from the beginning. But we also know technology creates its own challenges. For example, many of us feel tethered to our phones and worry about what we’ll miss if we’re not connected. We want to help people find the right balance and gain a sense of digital wellbeing. To that end, we’re going to release a series of features to help people understand their usage habits and use simple cues to disconnect when they want to, such as turning a phone over on a table to put it in “shush” mode, or “taking a break” from watching YouTube when a reminder pops up. We’re also kicking off a longer-term effort to support digital wellbeing, including a user education site which is launching today.
Posted by Kacey Fahey, Developer Marketing, Google Play
Based in Seattle, Big Fish Games was founded in 2002. Starting as a game studio, they quickly turned into a major publisher and distributor of casual games. Leading up to the launch of their hit time management game, Cooking Craze, the team ran an open beta on Google Play.
Big Fish Games found that using open beta provided more than 10x the amount of user feedback from around the world, and also gave them access to key metrics and Android Vitals in the Play Console. The ability to monitor game performance metrics pre-launch allowed the team to focus on areas of improvement, which lead to a 21% reduction in crash rate. The larger sample size of beta testers also provided more insights on player behavior and helped achieve a +7% improvement in day 1, day 7, and day 30 retention rates.
Together, Azure and PlayFab will further unlock the power of the intelligent cloud for the gaming industry, enabling game developers and delighting gamers around the world.
The post Microsoft acquires PlayFab, accelerating game development innovation in the cloud appeared first on The Official Microsoft Blog.
Posted by Wojtek Kalicinski, Android Developer Advocate; Akshay Kannan, Product Manager for Android Authentication; and Felipe Leme, Software Engineer on Android Frameworks
Starting in Oreo, Autofill makes it easy for users to provide credit cards,
logins, addresses, and other information to apps. Forms in your apps can now be
filled automatically, and your users no longer have to remember complicated
passwords or type the same bits of information more than once.
Users can choose from multiple Autofill services (similar to keyboards today). By default, we include Autofill with Google, but users can also select any third-party Autofill app of their choice. Users can manage this from Settings.
What’s available today
Today, Autofill with Google supports filling credit cards, addresses, logins,
names, and phone numbers. When logging in or creating an account for the first
time, Autofill also allows users to save the new credentials to their account.
If you use WebViews in your app, which many apps do for logins and other
screens, your users can now also benefit from Autofill support, as long as they
have Chrome 61 or later installed.
The Autofill API is open for any developer to implement a service. We are actively working with 1Password to help them with their implementation, and will be working with other password managers shortly.
We are also creating a new curated collection on the Play Store, which the “Add service” button in Settings will link to. If you are a password manager developer and would like us to review your app, please get in touch.
What you need to do as a developer
As an app developer, there are a few simple things you can do to take advantage
of this new functionality and make sure that it works in your apps:
Test your app and annotate your views if needed
In many cases, Autofill may work in your app without any effort. But to ensure consistent behavior, we recommend providing explicit hints to tell the framework about the contents of your fields. You can do this using either the android:autofillHints attribute or the setAutofillHints() method.
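As a sketch, a login layout annotated with autofill hints might look like this (the view ids and layout attributes are illustrative, not from any real app):

```xml
<!-- Hypothetical login form; "username" and "password" are standard
     autofill hint constants understood by the framework. -->
<EditText
    android:id="@+id/username"
    android:layout_width="match_parent"
    android:layout_height="wrap_content"
    android:autofillHints="username" />

<EditText
    android:id="@+id/password"
    android:layout_width="match_parent"
    android:layout_height="wrap_content"
    android:inputType="textPassword"
    android:autofillHints="password" />
```

The same hints can be set in code, e.g. view.setAutofillHints(View.AUTOFILL_HINT_PASSWORD).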
Similarly, with WebViews in your apps, you can use HTML Autocomplete
Attributes to provide hints about fields. Autofill will work in WebViews as
long as you have Chrome 61 or later installed on your device. Even if your app
is using custom views, you can also define
the metadata that allows autofill to work.
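For the WebView case, a login page can annotate its fields with the standard HTML autocomplete tokens (the form action and field names below are illustrative):

```html
<!-- Hypothetical login page served inside a WebView. -->
<form method="post" action="/login">
  <input type="text" name="user" autocomplete="username">
  <input type="password" name="pass" autocomplete="current-password">
  <button type="submit">Sign in</button>
</form>
```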
For views where Autofill does not make sense, such as a Captcha or a message compose box, you can explicitly mark the view as IMPORTANT_FOR_AUTOFILL_NO (this can also be set in the root of a view hierarchy). Use this field responsibly, and remember that users can always bypass it by long-pressing an EditText and selecting “Autofill” in the overflow menu.
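In layout XML the corresponding attribute is android:importantForAutofill; a hypothetical compose box might opt out like this:

```xml
<!-- Hypothetical message-compose field that opts out of autofill. -->
<EditText
    android:id="@+id/compose_message"
    android:layout_width="match_parent"
    android:layout_height="wrap_content"
    android:importantForAutofill="no" />
```

Programmatically, the equivalent is view.setImportantForAutofill(View.IMPORTANT_FOR_AUTOFILL_NO).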
Affiliate your website and mobile app
Autofill with Google can seamlessly share logins across websites and mobile apps
‒ passwords saved through Chrome can also be provided to native apps. But in
order for this to work, as an app developer, you must explicitly declare the
association between your website and your mobile app. This involves two steps:
Step 1: Host a JSON file at a well-known location on your website
If you’ve used technologies like App Links or Google Smart Lock before, you
might have heard about the Digital Asset Links (DAL) file. It’s a JSON file
placed under a well known location in your website that lets you make public,
verifiable statements about other apps or websites.
You should follow the Smart
Lock for Passwords guide for information about how to create and host the
DAL file correctly on your server. Even though Smart Lock is a more advanced way
of signing users into your app, our Autofill service uses the same
infrastructure to verify app-website associations. What’s more, because DAL
files are public, third-party Autofill service developers can also use the
association information to secure their implementations.
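As a sketch, a DAL file that lets an app receive credentials saved for your site looks like the following (the package name and certificate fingerprint are placeholders you must replace with your own):

```json
[{
  "relation": ["delegate_permission/common.get_login_creds"],
  "target": {
    "namespace": "android_app",
    "package_name": "com.example.app",
    "sha256_cert_fingerprints": ["AA:BB:CC:...:FF"]
  }
}]
```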
Step 2: Update your app’s manifest with the same information
Once again, follow the Smart Lock for Passwords guide to do this, under the “Declare the association in the Android app” section.
You’ll need to update your app’s manifest file with an asset_statements
resource, which links to the URL where your assetlinks.json file is hosted. Once
that’s done, you’ll need to submit your updated app to the Play Store, and fill
out the Affiliation
Submission Form for the association to go live.
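Sketched out, the manifest declaration and its string resource look like this (the domain is a placeholder for your own site):

```xml
<!-- AndroidManifest.xml -->
<application>
    <meta-data
        android:name="asset_statements"
        android:resource="@string/asset_statements" />
</application>

<!-- res/values/strings.xml -->
<string name="asset_statements" translatable="false">
    [{ \"include\": \"https://yourdomain.com/.well-known/assetlinks.json\" }]
</string>
```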
When using Android Studio 3.0, the App Links Assistant can generate all of this
for you. When you open the DAL generator tool (Tools -> App Links Assistant ->
Open Digital Asset Links File Generator), simply make sure you enable the new
checkbox labeled “Support sharing credentials between the app and website”.
Then, click on “Generate Digital Asset Links file”, and copy the preview content
to the DAL file hosted on your server and in your app. Please remember to verify
that the selected domain names and certificates are correct.
It’s still very early days for Autofill in Android. We are continuing to make
some major investments going forward to improve the experience, whether you use
Autofill with Google or a third party password manager.
Some of our key areas of investment include:
- Autofill with Google: We want to provide a great experience
out of the box, so we include Autofill with Google with all Oreo devices. We’re
constantly improving our field detection and data quality, as well as expanding
our support for saving more types of data.
- WebView support: We introduced initial support for filling
WebViews in Chrome 61, and we’ll be continuing to test, harden, and make
improvements to this integration over time, so if your app uses WebViews you’ll
still be able to benefit from this functionality.
- Third party app support: We are working with the ecosystem
to make sure that apps work as intended with the Autofill framework. We urge you
as developers to give your app a spin on Android Oreo and make sure that things
work as expected with Autofill enabled. For more info, see our full
documentation on the Autofill framework.
If you encounter any issues or have any suggestions for how we can make this
better for you, please send us feedback.
Today we’re kicking off Connect(); 2017, one of my favorite annual Microsoft developer events, where over three days we get to host approximately 150 livestreamed and interactive sessions for developers everywhere — no matter the tools they use or the platforms they prefer. Today at Connect(); 2017 I’m excited to share news that will help…
The ability to see something on a different scale often offers a new perspective. Launched with the Galaxy S8 and S8+ and now available with the Galaxy Note8,
My team at Microsoft has a guiding principle as the foundation of everything we do: Today’s students are tomorrow’s developers. As the head of cloud growth and ecosystems at Microsoft, working with students is one of the most fulfilling parts of my job because it gives me a lens into the future through the next…
From helping you find your favorite dog photos, to helping farmers in Japan sort cucumbers, machine learning is changing the way people use code to solve problems. But how does machine learning actually work? We wanted to make it easier for people who are curious about this technology to learn more about it. So we created Teachable Machine, a simple experiment that lets you teach a machine using your camera—live in the browser, no coding required.
Teachable Machine is built with a new library called deeplearn.js, which makes it easier for any web developer to get into machine learning. It trains a neural net right in your browser—locally on your device—without sending any images to a server. We’ve also open sourced the code to help inspire others to make new experiments.
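The teach-by-example idea can be sketched outside the browser, too. The following is a minimal Python sketch, not Teachable Machine’s actual model (which trains a neural net with deeplearn.js): it simply labels a new sample with the class of the nearest stored example, and the feature vectors and labels are made up for illustration.

```python
# Toy "teach by example" classifier: store labeled feature vectors,
# then predict by nearest neighbour. The vectors stand in for features
# a real system would extract from camera frames.
from math import dist

def train(examples):
    """examples: list of (feature_vector, label) pairs captured per class."""
    return list(examples)

def predict(model, sample):
    """Return the label of the stored example closest to the sample."""
    _, label = min(model, key=lambda pair: dist(pair[0], sample))
    return label

model = train([
    ((0.9, 0.1), "wave"),    # features recorded while waving
    ((0.1, 0.9), "thumbs"),  # features recorded with a thumbs-up
])
print(predict(model, (0.8, 0.2)))  # → wave
```

With only a handful of examples per class, this already captures the interaction model: show the machine a few samples, then let it label what the camera sees next.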
Check it out at g.co/teachablemachine.
Posted by Dan Albert, Android NDK Tech Lead
The latest version of the Android Native Development Kit (NDK), Android NDK r16
Beta 1, is now available for download. It
is also available in the SDK manager via Android Studio.
NDK r16 is a big mileston…