Google announces intent to acquire Velostrata

Today, Google is excited to announce that it has entered into an agreement to acquire Israel-based Velostrata, a leader in enterprise cloud migration technology.

As more and more enterprises move to the cloud, many need a simple way to migrate from on-premises and adopt the cloud at their own pace. This helps them take advantage of what the cloud has to offer—speed, scalability, and access to technologies like advanced data analytics and machine learning.

With Velostrata, Google Cloud customers obtain two important benefits: they’ll be able to adapt their workloads on the fly for cloud execution, and they can decouple their compute from storage without performance degradation. This means they can easily and quickly migrate virtual machine-based workloads like large databases, enterprise applications, DevOps, and large batch processing to and from the cloud. On top of that, customers can control and automate where their data lives at all times—either on-premises or in the cloud—in as little as a few clicks.

This acquisition, subject to closing conditions, will add to our broad portfolio of migration tools to support enterprises in their journey to the cloud. That way, businesses can simplify their onboarding process to Google Cloud Platform, and easily migrate workloads to Google Compute Engine.

We’re excited about the talented team that will be joining us in our Tel Aviv office, and the technical strength they bring to Google Cloud. For more information, you can read Velostrata’s blog post by co-founder and CEO Issy Ben-Shaul.

We look forward to sharing more details after close—stay tuned!

How TensorFlow is powering technology around the world

Editor’s Note: AI is behind many of Google’s products and is a big priority for us as a company (as you may have heard at Google I/O yesterday). So we’re sharing highlights on how AI already affects your life in ways you might not know, and how people from all over the world have used AI to build their own technology.

Machine learning is at the core of many of Google’s own products, but TensorFlow—our open source machine learning framework—has also been an essential component of the work of scientists, researchers and even high school students around the world. At Google I/O, we’re hearing from some of these people, who are solving big (we mean, big) problems—the origin of the universe, that sort of stuff. Here are some of the interesting ways they’re using TensorFlow to aid their work.

Ari Silburt, a Ph.D. student at Penn State University, wants to uncover the origins of our solar system. In order to do this, he has to map craters in the solar system, which helps him figure out where matter has existed in various places (and at various times) in the solar system. You with us? Historically, this process has been done by hand and is both time consuming and subjective, but Ari and his team turned to TensorFlow to automate it. They’ve trained the machine learning model using existing photos of the moon, and have identified more than 6,000 new craters.


On the left is a picture of the moon; it’s hard to tell where the heck those craters are. On the right, an accurate depiction of crater distribution, thanks to TensorFlow.

Switching from outer space to the rainforests of Brazil: Topher White (founder of Rainforest Connection) invented “The Guardian” device to prevent illegal deforestation in the Amazon. The devices—which are upcycled cell phones running on TensorFlow—are installed in trees throughout the forest, recognize the sound of chainsaws and logging trucks, and alert the rangers who police the area. Without these devices, the land must be policed by people, which is nearly impossible given the massive area it covers.


Topher installs guardian devices in the tall trees of the Amazon

Diabetic retinopathy (DR) is the fastest growing cause of blindness, with nearly 415 million diabetic patients at risk worldwide. If caught early, the disease can be treated; if not, it can lead to irreversible blindness. In 2016, we announced that machine learning was being used to aid diagnostic efforts in the area of DR, by analyzing a patient’s fundus image (photo of the back of the eye) with higher accuracy. Now we’re taking those fundus images to the next level with TensorFlow. Dr. Jorge Cuadros, an optometrist in Oakland, CA, is able to determine a patient’s risk of cardiovascular disease by analyzing their fundus image with a deep learning model.


Fundus image of an eye with sight-threatening retinal disease. With machine learning this image will tell doctors much more than eye health.

Good news for green thumbs of the world: Shaza Mehdi and Nile Ravenell are high school students who developed PlantMD, an app that lets you figure out if your plant is diseased. The machine learning model runs on TensorFlow, and Shaza and Nile used data from plantvillage.com and a few university databases to train the model to recognize diseased plants. Shaza also built another app that uses a similar approach to diagnose skin disease.

Shaza developed PlantMD, an app that recognizes diseased plants

Shaza’s story

To learn more about how AI can bring benefits to everyone, check out ai.google.

Now students can create their own VR tours

Editor’s note: For Teacher Appreciation Week, we’re highlighting a few ways Google is supporting teachers—including Tour Creator, which we launched today to help schools create their own VR tours. Follow along on Twitter throughout the week to see more on how we’re celebrating Teacher Appreciation Week.

Since 2015, Google Expeditions has brought more than 3 million students to places like the Burj Khalifa, Antarctica, and Machu Picchu with virtual reality (VR) and augmented reality (AR). Both teachers and students have told us that they’d love to have a way to also share their own experiences in VR. As Jen Zurawski, an educator with Wisconsin’s West De Pere School District, put it: “With Expeditions, our students had access to a wide range of tours outside our geographical area, but we wanted to create tours here in our own community.”  

That’s why we’re introducing Tour Creator, which enables students, teachers, and anyone with a story to tell, to make a VR tour using imagery from Google Street View or their own 360 photos. The tool is designed to let you produce professional-level VR content without a steep learning curve. “The technology gets out of the way and enables students to focus on crafting fantastic visual stories,” explains Charlie Reisinger, a school Technology Director in Pennsylvania.

Once you’ve created your tour, you can publish it to Poly, Google’s library of 3D content. From Poly, it’s easy to view: all you need to do is open the link in your browser or view it in Google Cardboard. You can also embed it on your school’s website for more people to enjoy. Plus, later this year, we’ll add the ability to import these tours into the Expeditions application.

Tour Creator: Show people your world

Here’s how a school in Lancaster, PA is using Tour Creator to show why they love where they live.

“Being able to work with Tour Creator has been an awesome experience,” said Jennifer Newton, a school media coordinator in Georgia. “It has allowed our students from a small town in Georgia to tell our story to the world.”

To build your first tour, visit g.co/tourcreator. Get started by showing us what makes your community special and why you #LoveWhereYouLive!

Lookout: an app to help blind and visually impaired people learn about their surroundings

There are over 253 million blind or visually impaired people in the world. To make the world more accessible to them, we need to build tools that can work with the ever-changing environment around us. Our new Android app Lookout, coming to the Play Store in the U.S. this year, helps people who are blind or visually impaired become more independent by giving auditory cues as they encounter objects, text and people around them.

We recommend wearing your Pixel device in a lanyard around your neck, or in your shirt pocket, with the camera pointing away from your body. After you open the app and select a mode, Lookout processes items of importance in your environment and shares information it believes to be relevant—text from a recipe book, or the location of a bathroom, an exit sign, a chair or a person nearby. Lookout delivers spoken notifications, designed to be used with minimal interaction, allowing people to stay engaged with their activity.


There are four modes to choose from within the app: Home, Work & Play, Scan or Experimental (which lets you test out features we’re working on). When you select a specific mode, Lookout will deliver information that’s relevant to the selected activity. If you’re getting ready to do your daily chores, you’d select “Home,” and you’ll hear notifications that tell you where the couch, table or dishwasher is. It gives you an idea of where those objects are in relation to you; for example, “couch 3 o’clock” means the couch is on your right. If you select “Work & Play” when heading into the office, it may tell you when you’re next to an elevator or stairwell. As more people use the app, Lookout will use machine learning to learn what people are interested in hearing about, and will deliver these results more often.


Screenshot of the “Modes” available in Lookout, including “Work and Play,” “Home,” “Scan,” and “Experimental.” A second screenshot shows a “Live” image of lavender plants.

The core experience is processed on the device, which means the app can be used without an internet connection. Accessibility will be an ongoing priority for us, and Lookout is one step in helping blind or visually impaired people gain more independence by understanding their physical surroundings.

Google I/O 2018: What’s new in Android


Posted By Stephanie Cuthbertson, Product Management Director, Android

As Android has grown exponentially over the past ten years, we’ve also seen our developer community grow dramatically. In countries like China, India, and Brazil, the number of developers using our IDE almost tripled – in just two years. With such growth, we feel an even greater responsibility to invest in our developer experience. Guided by your feedback, we’ve focused our efforts on making mobile development fast and easy, helping you get more users by making apps radically smaller, and increasing engagement to keep users coming back. We’re also pretty excited to see Android Things go to 1.0, creating new opportunities for you to develop – everything from major consumer devices to cool remote control vehicles! As Day 1 of Google I/O kicks off, let’s take a closer look at these major themes from the Developer Keynote:

Development: making mobile development fast and easy

  • Android Jetpack — Today, we announced Android Jetpack, designed to accelerate your app development. Android Jetpack is the next generation of Android components, bringing together the benefits of the Support Library — backwards compatibility and immediate updates — to a larger set of components, making it quick and easy to build robust, high quality apps. Android Jetpack manages activities like background tasks, navigation, and lifecycle management, so you can eliminate boilerplate code and focus on what makes your app great. Android Jetpack is designed to work well with Kotlin, saving you even more code with Android KTX. The new Android Jetpack components released today include WorkManager, Paging, Navigation, and Slices.

  • Kotlin — Since announcing support for Kotlin last year, the developer community has embraced the language. Most importantly, 95% of developers tell us they are very happy with using Kotlin for their Android development. And, the more developers use it, the more that number rises. The number of Play Store apps using Kotlin grew 6x in the last year. 35% of pro developers use it, and that number is growing each month. We are continuing to improve the Kotlin developer experience across our libraries, tooling, runtime, documentation and training. Android KTX is launching today as part of Android Jetpack to optimize the Kotlin developer experience. Tooling continues to improve with Android Studio, Lint support, and R8 optimizations. We have even tuned the Android Runtime (ART) in Android P, so that apps built with Kotlin can run faster. We have rolled out Kotlin code snippets in our official documentation, and are publishing a Kotlin version of the API reference documentation today. Earlier this week, we launched a new Kotlin Bootcamp on Udacity, which is a great resource for developers who are new to Kotlin. Lastly, we now have a Kotlin specialization in the Google Developers Experts Program. If you still haven’t used Kotlin, I hope you give it a try.
  • Android Studio 3.2 Canary – Android Studio 3.2 features tools for Android Jetpack, including a visual Navigation Editor and new code refactoring tools. The canary release also includes build tools to create the new Android App Bundle format, Snapshots in the Android Emulator for fast start time, the new R8 optimizer for smaller download and install app code size, a new Energy Profiler to measure your app’s impact on battery life, and more. You can download the latest version of Android Studio 3.2 from the canary channel download page.

Distribution: making apps radically smaller

Introducing Android App Bundle.

  • Android App Bundle & Google Play Dynamic Delivery — Introducing the new app model for Android. Dramatically reduce app size with a new publishing format—the Android App Bundle. In Android Studio, you’ll now build an app bundle that contains everything your app needs for any device—all the languages, every device screen size, every hardware architecture. Then, when a user downloads your app, Google Play’s new Dynamic Delivery will only deliver the code and resources matching the user’s device. People see a smaller install size on the Play Store, can download your app more quickly, and save space on their devices.
    (Left) An example of all resources being delivered to a device via a legacy APK.
    (Right) An example of Dynamic Delivery serving just what’s needed to a device.
  • Dynamic features via the Android App Bundle — The Android App Bundle also enables modularization so that you can deliver features on-demand, instead of during install. You can build dynamic feature modules in the latest Android Studio canary release. Join our beta program to publish them on Google Play.
  • Google Play Console — New features and reports in the Play Console will help you improve your app’s performance and grow your business. Read about the improvements to the dashboard, statistics, Android vitals, pre-launch report, acquisition report, and subscriptions dashboard. You can also upload, test, and publish apps using our new publishing format, the Android App Bundle.
  • Google Play Instant — After launching in beta at GDC, today we announced that all game developers can build instant apps and we’re thrilled to welcome Candy Crush Saga. Google Play Instant is now available on over 1 billion devices worldwide from the Play Store, search, social and most places you can tap a link. To make instant apps easier to build, we are launching a Unity plugin and beta integration with Cocos creator this week. Recently, we’ve started testing Google Play Instant compatibility with AdWords, allowing people to try out games directly from ads, across all the channels reached by Universal App campaigns.

Engagement: bringing users back more and more

  • Slices – Slices are UI templates that display a rich array of dynamic and interactive content from your app, across Android and within Google surfaces. Slices can include live data, scrolling content, inline actions, and deep-linking into your app so users can do everything from playing music to checking reservation updates. Slices can also contain interactive controls like toggles and sliders. You can get started building Slices today, and they will begin appearing for users soon.
Check reservations with Slices. Control music with Slices. Call a Lyft using Slices.
  • Actions — Actions are a new way to make your app’s capabilities and content more accessible, so that people can easily get to them at the right moment. App Actions will appear to users based on usage and relevance, across multiple Google and Android surfaces, such as the Google Search App, the Play Store, the Google Assistant, and the Launcher. App Actions will be available for all developers to try soon; please sign up here if you’d like to be notified. You can also choose to build a Conversational Action as a companion experience to your app. This works on a variety of Assistant-enabled devices, such as speakers and smart displays. Both types of Actions use a new common catalog of intents.


Smarter devices: a powerful platform for IoT devices

  • Android Things 1.0 – Android Things is Google’s managed OS that enables developers to build and maintain Internet of Things devices at scale. Earlier this year at CES, we announced that Lenovo, Harman, LG, and iHome are all building Assistant-enabled products powered by Android Things. After a developer preview with over 100,000 SDK downloads and feedback from more than 10,000 developers, we announced Android Things 1.0 this week. Four new System-on-Modules (SoMs) are now supported on the platform, with guaranteed long-term support for three years and additional options for extended support, making it easier to go from prototype to production. To make product development more seamless than ever, the accompanying Android Things Console is also ready for production. It helps developers easily manage and update their devices with the latest stability fixes and security updates provided by Google.

To get started with Android Things, visit our developer site and the new Community Hub to explore kits, sample code, and community projects, and join Google’s IoT Developers Community to stay updated. We introduced a limited program to partner with the Android Things team for technical guidance and support in building your product. If your company is interested, sign up for our OEM Partner Program.
In addition to all these new developments, we’re on the ground in over 140 countries, growing and expanding the developer community through programs such as Women Techmakers and Google Developer Groups (GDGs). We’re investing in training programs like Google Developers Certification and building more courses through Udacity and other partners to help developers deepen their technical capabilities. Today, 225 Google Developers Agency Program members from 50 agencies in 15 countries are Android Certified. As part of our Google Developers Experts Program, we also now have more than 90 Android Developer Experts around the world actively supporting developers, start-ups and companies as they build and launch innovative apps.
We also continue to recognize the great work from top app and game developers. This year, we held our third annual Google Play Awards. The nominees represent some of the best experiences available on Android, with an emphasis on overall quality, strong design, technical performance, and innovation. Check out the winners and nominees.
During Google I/O, attendees and viewers have an opportunity to dive deep with 48 Android & Play breakout sessions. Thank you for all your wonderful feedback, and please keep giving us your advice on where we should go next.

I/O 2018: Everything new in the Google Play Console

Posted by Tian Lim, VP of UX and Product, Google Play

Google Play connects a thriving ecosystem of developers to people using more than 2 billion active Android devices around the world. In fact, more than 94 billion apps were installed from Google Play in the last year alone. We’re continuing to empower Android developers with new features in the Play Console to help you improve your app’s performance and grow your business. And, at Google I/O 2018, we’re introducing our vision for a new Android app model that is modular and dynamic.

Benefit from size savings with the Android App Bundle

The Android App Bundle is Android’s new publishing format, with which you can more easily deliver a great experience in a smaller app size, and optimize for the wide variety of Android devices and form factors available. The app bundle includes all your app’s compiled code and resources, but defers APK generation and signing to Google Play. You no longer have to build, sign, and manage multiple APKs.

Google Play’s new app serving model, called Dynamic Delivery, uses your app bundle to generate and serve optimized APKs for each user’s device configuration. This means people download only the code and resources they need to run your app. People see a smaller install size on the Play Store, can install your app more quickly, and save space on their devices.

(Left) An example of all resources being delivered to a device via a legacy APK.
(Right) An example of Dynamic Delivery serving just what’s needed to a device.
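
As a rough sketch of the build-side configuration (assuming the Gradle Kotlin DSL and an Android Gradle plugin version that supports app bundles), the per-configuration splits can be controlled from the module-level build file; they are enabled by default, so this mainly shows where the knobs live:

// Module-level build.gradle.kts (sketch; exact DSL varies by plugin version).
android {
    bundle {
        // Play's Dynamic Delivery generates per-configuration APKs for each split type.
        language { enableSplit = true }
        density { enableSplit = true }
        abi { enableSplit = true }
    }
}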

With the Android App Bundle, you’re also able to add dynamic feature modules to your app. Through Dynamic Delivery, your users can download your app’s dynamic features on-demand, instead of during the initial install, further reducing your app’s download size. To publish apps with dynamic feature modules, apply to join the beta.

Start using the Android App Bundle in the latest Android Studio canary release. Test your release using the testing tracks in the Play Console before pushing to production. Watch these I/O sessions to hear from the team as they introduce the new app model:

Fix quality and performance issues in your app or game

An internal study Google ran last year found that over 40% of one-star reviews on the Play Store mentioned app stability as an issue. Conversely, people consistently reward the best performing apps with better ratings and reviews, leading to better rankings on Google Play and more installs. Not only that, but people tend to be more engaged and willing to spend more time and money in those apps. To help you understand and fix quality issues we’re improving a number of features in the Google Play Console.

  • Use the new internal test track to push your app to up to 100 internal testers in seconds before you release it to alpha, beta, or production. You can also have multiple closed test tracks for different versions of your app, before pushing them to open betas or production.
  • The pre-launch report summarizes issues found in alpha or beta versions of your app, based on automated testing on popular devices in Firebase Test Lab. There are several new features to help you test the parts of your app or game that crawlers find harder to reach: create demo loops for games written with OpenGL, record scripts in Android Studio for the test crawler to follow, identify deep links, and provide credentials to go behind logins. In addition to reporting crashes, performance and security issues, and taking screenshots of the crawled screens, the report will soon identify accessibility issues you should fix to ensure a positive user experience for the widest audience.
  • Android vitals now analyzes data about startup time and permission denials in addition to battery, rendering, and stability. The revamped dashboard highlights crash rate, ANR rate, excessive wakeups, and stuck wake locks: the core vitals developers should give attention to. All other vitals, when applicable to your type of app or game, should be monitored to ensure they aren’t having a negative effect. You’ll also see anomalies in any vitals, when there’s a sudden change you should be aware of, and benchmarks so that you can compare your app’s performance to that of similar apps. Exhibiting bad behavior in vitals will negatively affect the user experience in your app and is likely to result in bad ratings and poor discoverability on the Play Store.

Watch these I/O sessions where we introduce the new features and share examples of how developers are using them successfully:

Improve your store performance and user acquisition

The Play Console has tools and reports to help your whole team understand and improve your app’s store performance and business metrics. The Play Console’s access management controls were recently improved so you can more easily grant access to your whole team while having granular control over which data and tools they can see and use.

  • The app dashboard has been improved so you can quickly digest need-to-know information and take action. The dashboard now shows more data, is easier to read, and is customizable. This should be your first stop to understand the latest activity around your app or game.
  • You can now configure the statistics report to show you how your instant apps are performing. See how many people are launching your instant app by different dimensions and how many go on to install the full app on their device. All app and game developers can build instant experiences today. Learn more in the instant apps documentation.
  • The acquisition report will start showing you more data about how people find your app and whether they go on to install it and make purchases. You can now see average revenue per user and retention benchmarks, to compare your app’s performance to similar apps, at every stage of the acquisition funnel. Organic breakdown, rolling out soon, will separate the number of people who find your store listing by searching the Play Store from those who get there via browsing. You will also be able to see what search terms are driving the most traffic, conversions, and purchases. With these improvements, you can further optimize your efforts to grow and retain a valuable audience.
  • Order management has also been updated to enable you to offer partial refunds for in-app products and subscriptions.

Watch these I/O sessions where we introduce the new features and share examples of how developers are using them successfully:

Grow and optimize your subscriptions business

Subscriptions continue to see huge growth, with subscribers on Google Play growing over 80% year over year. Google Play Billing offers developers useful features to acquire, engage, and retain subscribers, and gives users a consistent and familiar purchase flow. We’re making improvements to help you prepare your subscriptions business for the future and to give users more information on their subscriptions.

  • With the Google Play Billing Library, you can easily integrate new features with minimal coding. Now with the newly released version 1.1, you can upgrade subscriptions without changing the renewal date. Also, you will soon be able to make price changes to existing SKUs.
  • The new subscriptions center on Google Play lets people manage their active subscriptions, including fixing payment issues or restoring canceled subscriptions. You can create deep links so your users can directly access subscription management options on the Play Store. Soon, people who cancel subscriptions will have the option to leave feedback stating why, which you will have access to in the Play Console.
  • Subscription reports in the Play Console have been updated to help you better understand your retention and churn across multiple subscriptions, times, and territories. You can now measure whether features such as free trials, account holds, and grace periods are successful in acquiring and retaining users.

Watch our I/O session where we explain the new features:

Prepare for the upcoming Play requirement for target API level

As we have announced, Google Play will require new apps (from August 2018) and app updates (from November 2018) to target API level 26 or higher. For more information and practical guidance on preparing for the new requirement, watch the I/O session, Migrating your existing app to target Android Oreo and above, and review our migration guide. If you develop an SDK or library that’s used by developers, make sure it’s ready to target Oreo too and sign up to receive news and updates for SDK providers.
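
As a rough sketch of what the requirement means in your build file, the target API level is set in the module-level build script; the snippet below assumes the Gradle Kotlin DSL of that era, and values like com.example.app are placeholders:

// Module-level build.gradle.kts (sketch; exact DSL varies by Android Gradle plugin version).
plugins {
    id("com.android.application")
    kotlin("android")
}

android {
    compileSdkVersion(28)
    defaultConfig {
        applicationId = "com.example.app"   // placeholder
        minSdkVersion(19)
        targetSdkVersion(26)                // meets the new Play requirement (API 26 or higher)
    }
}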

Get more resources to help you succeed on Google Play

To find out more about all these new features, learn best practices, understand how other developers are finding success, and hear from the teams building these features, watch the Android & Play sessions at I/O 2018. For more developer resources about how to improve your app’s performance on Google Play, read this guide to the Google Play Console and visit the Android developers website. Finally, to stay up to date, sign up to our newsletter and follow us on Twitter, LinkedIn, and Medium where we post regularly.


Android Studio 3.2 Canary

Today at Google I/O 2018 we announced the latest preview of Android Studio 3.2, which includes an exciting set of features that support the Android P Developer Preview, the new Android App Bundle, and Android Jetpack. Download Android Studio 3.2 from our canary release channel today to explore one of the most feature-rich releases of the year.

Android Jetpack is a set of libraries, developer tools and architectural guidance to help make it quick and easy to build great Android apps. It provides common infrastructure code so you can focus on what makes your app unique. Android Studio 3.2 includes a wide set of tools that support Jetpack, from a visual Navigation Editor that uses the Navigation API and templates for the Android Slices APIs, to refactoring tools to migrate to the new Android support libraries in Jetpack — AndroidX.

The canary 14 release of Android Studio 3.2 also supports the new Android app model that is the evolution of the APK format, the Android App Bundle. With no code changes, Android Studio 3.2 will help you create a new Android App Bundle and have it ready for publishing on Google Play.

There are 20 major features in this release of Android Studio, ranging from ultra-fast Android Emulator Snapshots and Sample Data in the Layout Editor to a brand new Energy Profiler to measure the battery impact of your app. If any of these features sound interesting, download the preview of Android Studio 3.2 today.

To see these features demoed in action and to get a sneak peek at other features we are working on, check out the Google I/O 2018 session – What’s new in Android Development Tools.

What’s new in Android Development Tools – Google I/O 2018

Below is a full list of new features in Android Studio 3.2, organized by key developer flows.

Develop

  • Navigation Editor – As a part of Jetpack, Android Studio 3.2 features a new way to design the navigational structure between the screens of your app. The navigation editor is a visual editor which allows you to construct XML resources that support using the new Navigation Component in Jetpack.

Navigation Editor

  • AndroidX Refactoring Support – One of the components of Jetpack is rethinking and refactoring the Android Support Libraries to a new Android extension library (AndroidX) namespace. As part of the early preview of AndroidX, Android Studio 3.2 helps you through this migration with a new refactoring action. To use the feature, navigate to Refactor → Refactor to AndroidX. As an additional enhancement to the refactoring process, if you have any maven dependencies that have not migrated to the AndroidX namespace, the Android Studio build system will automatically convert those project dependencies as well. You can manually control the conversion process by toggling the android.enableJetifier = true flag in your gradle.properties file. While the refactoring action supports common project configurations, we recommend that you save a backup of your project before you refactor. Learn more.

AndroidX Refactoring Support

  • Sample Data – Many Android layouts have runtime data that can make it difficult to visualize the look and feel of a layout during the design stage of app development. Sample Data in the Layout Editor allows you to use placeholder data to aid in the design of your app. From RecyclerView and ImageView to TextView, you can add built-in sample data to populate these views via a popup window in the Layout Editor. To try out the feature, add a RecyclerView to a new layout, then click the new tools design-time attributes icon and choose a selection from the carousel of sample data templates.

Design Time Sample Data

  • Material Design Update – Material Design continues to evolve not only as a design system but also in implementation on Android. When you start migrating from the Android Design support library to the new MaterialComponents app theme and library, Android Studio 3.2 will offer you access to new and updated widgets such as BottomAppBar, buttons, cards, text fields, new font styles and more. Learn more.

New Material Design Components

  • Slices support – Slices is a new way to embed portions of your app content in other user interface surfaces in the Android operating system. Slices is backwards compatible to Android 4.4 KitKat (API 19) and will enable you to surface app content in Google Search suggestions. Android Studio 3.2 has a built-in template to help you extend your app with the new Slice Provider APIs, as well as new lint checks to ensure that you’re following best practices when constructing the slices. To get started, right-click on a project folder and navigate to New → Other → Slice Provider. Learn how to test your slice interactions by checking out the getting started guide.

Slices Provider Template

  • CMakeList Editing Support – Android Studio supports CMake build scripts for your app’s C/C++ code. With this release of Android Studio 3.2, code completion and syntax highlighting now work on common CMakeList commands.

CMakeList Code Completion

  • What’s New Assistant – Android Studio 3.2 has a new assistant panel that opens automatically after an update to inform you about the latest changes to the IDE. You can also open the panel by navigating to Help → What’s New in Android Studio.

What’s New Assistant

  • IntelliJ Platform Update – Android Studio 3.2 includes the IntelliJ 2018.1 platform release, which has many new features such as data flow analysis, partial Git commits support, and a ton of new code analysis enhancements. Learn more.

Build

  • Android App Bundle – The Android App Bundle is the new app publishing format designed to help you deliver smaller APKs to your users. Google Play has a new Dynamic Delivery platform that accepts your Android App Bundle, and delivers only the APKs that you need on a specific device. Android Studio 3.2 enables you to create and test an Android App Bundle. As long as you are running the latest Android Gradle plugin (com.android.tools.build:gradle:3.2.0-alpha14), you can rebuild your code as an app bundle and get the benefit of smaller APKs based on language, screen density, and ABIs with no changes to your app code. To get started, navigate to Build → Build Bundle / APK or Build → Generate Signed Bundle / APK. Learn more.

Build Android App Bundle

  • D8 Desugaring – In some cases, new Java language features require new bytecodes and language APIs, but older Android devices may not support these features. Desugaring allows you to use these features on older devices by replacing new bytecodes and language APIs with older ones during the build process. Desugaring was initially introduced with Android Studio 3.0 as a separate tool, and in Android Studio 3.1 we integrated the desugaring step into the D8 tool as an experimental feature, reducing overall build time. Now, D8 desugaring is turned on by default for Android Studio 3.2, so you can use most of the latest language changes while targeting older devices.
  • R8 Optimizer – During the app build process, Android Studio historically used ProGuard to optimize and shrink Java language bytecode. Starting with Android Studio 3.2, we are starting the transition to use R8 as a replacement to ProGuard. To experiment with R8, add android.enableR8=true to your gradle.properties file. R8 is still experimental, so we do not recommend publishing your app using R8 yet. Learn more.

Enable R8 in Android Studio

Test

  • Emulator Snapshots – With Quickboot in the Android Emulator, we enabled you to launch the emulator in under 6 seconds. With Android Studio 3.2, we have extended this feature to enable you to create snapshots at any emulator state and start them in under 2 seconds. When testing and developing your app, you can pre-configure an Android Virtual Device (AVD) snapshot with the presets, apps, data and settings that you want in place, and repeatedly go back to the same snapshot. Snapshots load in under 2 seconds, and you can launch specific snapshots from the Android Emulator Extended Controls panel, the command line ( ./adb emu avd snapshot load snap_2018-04-29_00-01-12 ) or from within Android Studio.

Android Emulator Snapshots

  • Screen Record in Android Emulator – Previously, creating a screen recording of your app only worked on Android 4.4 KitKat (API 19) and above, with no audio and limited Android Emulator support. With the latest Android Emulator (v27.3+), you can take screen recordings on any API level, with audio. Plus, there is a built-in conversion to output to GIF and WebM. You can trigger the new screen record feature via the Android Emulator Extended Controls panel, the command line ( ./adb emu screenrecord start --time-limit 10 /sample_video.webm ), or from Android Studio.

Screen record in Android Emulator

  • Virtual Scene Camera for Android Emulator – Developing and testing apps with ARCore is now even easier with the new Virtual Scene camera, which allows you to iterate on your augmented reality (AR) experience within a virtual environment. The emulator is calibrated to work with ARCore APIs for AR apps and allows you to inject virtual scene bitmap images. The virtual scene camera can also be used as a regular HAL3 compatible camera. Open the built-in Android camera app inside the Android Emulator to get started. By default, the new virtual scene camera is the rear camera for new Android Virtual Devices created with Android Studio 3.2. Learn more.

Virtual Scene Camera in Android Emulator

  • ADB Connection Assistant – To help troubleshoot your Android device connections via ADB, Android Studio 3.2 has a new assistant. The ADB Connection Assistant walks you through common troubleshooting steps to connect your Android device to your development machine. You can trigger the assistant from the Run dialog box or by navigating to Tools → Connection Assistant.

ADB Connection Assistant

Optimize

  • Energy Profiler – Battery life is a key concern for many phone users, and your app may impact battery life more than you realize. The new Energy Profiler in the performance profiler suite can help you understand the energy impact of your app on an Android device. You can now visualize the estimated energy usage of system components, plus inspect background events that may contribute to battery drain. To use the energy profiler, ensure you are connected to an Android device or emulator running Android 8.0 Oreo (API 26) or higher. Learn more.

Energy Profiler

  • System Trace – The new System Trace feature in the CPU Profiler allows you to inspect how your app interacts with system resources in fine-grained detail. Inspect exact timings and durations of your thread states, visualize where your CPU bottlenecks are across all cores, and add custom trace events to analyze. To use system trace, start profiling your app, click into the CPU Profiler, and then choose the System Trace recording configuration. Learn more.

System Trace

  • Profiler Sessions – We now automatically save Profiler data as “sessions” to revisit and inspect later while you have Android Studio open. We’ve also added the ability to import and export your CPU recordings and heap dumps for later analysis or inspection with other tools.

Profiler Sessions

  • Automatic CPU Recording – You can now automatically record CPU activity using the Debug API; a short sketch follows this list. After you deploy your app to a device, the profiler automatically starts recording CPU activity when your app calls startMethodTracing(String tracePath), and stops recording when your app calls stopMethodTracing(). Similarly, you can also now automatically start recording CPU activity on app start-up by enabling this option in your run configuration.
  • JNI Reference Tracking – For those of you who have C/C++ code in your Android app, Android Studio 3.2 now allows you to inspect the memory allocations of your JNI code in the Memory Profiler. As long as you deploy your app to a device running Android 8.0 Oreo (API 26) and higher, you can drill down into the allocation call stack from your JNI reference. To use the feature, start a memory profiler session, and select the JNI Heap from the Live Allocation drop-down menu.

JNI Reference Tracking
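
Here is a minimal sketch of the Debug API hook mentioned in the Automatic CPU Recording item above; the surrounding function and the trace name are placeholders:

import android.os.Debug

// The profiler begins its CPU recording at startMethodTracing() and ends it at stopMethodTracing().
fun loadCatalog() {
    Debug.startMethodTracing("catalog_load")   // trace name is a placeholder
    try {
        // ... the work you want captured in the CPU recording ...
    } finally {
        Debug.stopMethodTracing()
    }
}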

To recap, the latest canary of Android Studio 3.2 includes these new major features:

Develop

  • Navigation Editor
  • AndroidX Refactoring
  • Sample Data
  • Material Design Update
  • Android Slices
  • CMakeList editing
  • What’s New Assistant
  • New Lint Checks
  • IntelliJ Platform Update

Build

  • Android App Bundle
  • D8 Desugaring
  • R8 Optimizer

Test

  • Android Emulator Snapshots
  • Screen Record in Android Emulator
  • Virtual Scene Android Emulator Camera
  • ADB Connection Assistant

Optimize

  • Energy Profiler
  • System Trace
  • Profiler Sessions
  • Automatic CPU Recording
  • JNI Reference Tracking

Check out the preview release notes for more details.

Getting Started

Download

Download the latest version of Android Studio 3.2 from the canary channel download page. If you are using a previous canary release of Android Studio, make sure you update to Android Studio Canary 14 or higher. If you want to maintain a stable version of Android Studio, you can run the stable release version and canary release versions of Android Studio at the same time. Learn more.

To use the mentioned Android Emulator features make sure you are running at least Android Emulator v27.3+ downloaded via the Android Studio SDK Manager.

We appreciate any early feedback on things you like, and issues or features you would like to see. Please note, to ensure we maintain product quality, the features you see in the canary channel may not be available in the next stable release channel until they are ready for stable usage. If you find a bug or issue, feel free to file an issue. Connect with us, the Android Studio development team, on our Google+ page or on Twitter.

Use Android Jetpack to Accelerate Your App Development

Posted by Chris Sells, Benjamin Poiesz, Karen Ng, Product Management, Android Developer Tools

Today we’re excited to introduce Android Jetpack, the next generation of components, tools and architectural guidance to accelerate your Android app development.

Android Jetpack was inspired by the Support Library, a set of components to make it easy to take advantage of new Android features while maintaining backwards compatibility; it’s currently used by 99% of apps in the Play Store. Following on that success, we introduced the Architecture Components, designed to make it easier to deal with data in the face of changes and the complications of the app lifecycle. Since we introduced those components at I/O just one year ago, an overwhelming number of you have adopted them. Companies such as LinkedIn, Zillow and iHeartRadio are seeing fewer bugs, higher testability and more time to focus on what makes their app unique.

The Android developer community has been clear — not only do you like what we’ve done with these existing components, but we know that you want more! And so more is what you get.

What is Android Jetpack?

Android Jetpack is a set of components, tools and guidance to make great Android apps. The Android Jetpack components bring together the existing Support Library and Architecture Components and arrange them into four categories: Foundation, Architecture, Behavior, and UI.

Android Jetpack components are provided as “unbundled” libraries that are not part of the underlying Android platform. This means that you can adopt each component at your own speed, at your own time. When new Android Jetpack functionality is available, you can add it to your app, deploy your app to the Play Store and give users the new features all in a single day (if you’re quick)! The unbundled Android Jetpack libraries have all been moved into the new androidx.* namespace (as described in detail in this post).

In addition, your app can run on various versions of the platform because Android Jetpack components are built to provide their functionality independent of any specific version, providing backwards compatibility.

Further, Android Jetpack is built around modern design practices like separation of concerns and testability as well as productivity features like Kotlin integration. This makes it far easier for you to build robust, high quality apps with less code. While the components of Android Jetpack are built to work together, e.g. lifecycle awareness and live data, you don’t have to use all of them — you can integrate the parts of Android Jetpack that solve your problems while keeping the parts of your app that are already working great.

We know that these benefits are important to you because of feedback like this:

“We had been thinking of trying out MVVM in our code base. Android Architecture Components gave us an easy template to implement it. And it’s helped make our code more testable as well; the ability to unit test ViewModels has definitely increased code robustness.”

— Sumiran Pradhan, Sr. Engineer, Zillow

If you want to learn more about how companies are using Android Jetpack components, you can read the developer stories on the Android Developer site.

And finally, as you can see from the Android Jetpack diagram above, today we’re announcing new components as well.

What’s New

Android Jetpack comes with five new components:

  • WorkManager alpha release
  • Navigation alpha release
  • Paging stable release
  • Slices alpha release
  • Android KTX (Kotlin Extensions) alpha release

WorkManager

The WorkManager component is a powerful new library that provides a one-stop solution for constraint-based background jobs that need guaranteed execution, replacing the need to use things like jobs or SyncAdapters. WorkManager provides a simplified, modern API, the ability to work on devices with or without Google Play Services, the ability to create graphs of work, and the ability to query the state of your work. Early feedback is very encouraging, but we’d love to make sure that your use cases are covered, too. You can see what we have so far and provide feedback on the WorkManager component’s alpha.
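
To make that concrete, here is a minimal sketch of a constraint-based, guaranteed job. UploadLogsWorker and scheduleLogUpload are placeholders, and the snippet follows the later stable WorkManager API (Result.success(), WorkManager.getInstance(context)) rather than this first alpha:

import android.content.Context
import androidx.work.Constraints
import androidx.work.NetworkType
import androidx.work.OneTimeWorkRequest
import androidx.work.WorkManager
import androidx.work.Worker
import androidx.work.WorkerParameters

// doWork() runs on a background thread; WorkManager guarantees it eventually executes.
class UploadLogsWorker(context: Context, params: WorkerParameters) : Worker(context, params) {
    override fun doWork(): Result {
        // ... upload the logs here ...
        return Result.success()
    }
}

fun scheduleLogUpload(context: Context) {
    val constraints = Constraints.Builder()
        .setRequiredNetworkType(NetworkType.UNMETERED)   // e.g. only run on Wi-Fi
        .build()
    val request = OneTimeWorkRequest.Builder(UploadLogsWorker::class.java)
        .setConstraints(constraints)
        .build()
    WorkManager.getInstance(context).enqueue(request)
}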

Navigation

While activities are the system-provided entry points into your app’s UI, their inflexibility when it comes to sharing data between each other and transitions has made them a less-than-ideal architecture for constructing your in-app navigation. Today we are introducing the Navigation component as a framework for structuring your in-app UI, with a focus on making a single-Activity app the preferred architecture. With out-of-the-box support for Fragments, you get all of the Architecture Components benefits such as Lifecycle and ViewModel, while allowing Navigation to handle the complexity of FragmentTransactions for you. Further, the Navigation component allows you to declare transitions that we handle for you, automatically builds the correct Up and Back behavior, includes full support for deep links, and provides helpers for connecting Navigation into the appropriate UI widgets, like the navigation drawer and bottom navigation. But that’s not all! The Navigation Editor in Android Studio 3.2 allows you to see and manage your navigation properties visually:

The Navigation component is also in alpha and we’d love your feedback.
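
As a small illustration, navigating to another destination from a Fragment looks roughly like this, assuming the navigation-fragment-ktx artifact; HomeFragment and the action ID are placeholders defined in your navigation graph:

import androidx.fragment.app.Fragment
import androidx.navigation.fragment.findNavController

class HomeFragment : Fragment() {
    fun openDetails() {
        // R.id.action_home_to_details is a placeholder action from the navigation graph XML.
        findNavController().navigate(R.id.action_home_to_details)
    }
}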

Paging

Data presented in an app can be large and costly to load, so it’s important to avoid downloading, creating, or presenting too much at once. The Paging component version 1.0.0 makes it easy to load and present large data sets with fast, infinite scrolling in your RecyclerView. It can load paged data from local storage, the network, or both, and lets you define how your content gets loaded. It works out of the box with Room, LiveData, and RxJava.
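
Here is a rough sketch of the Room-plus-Paging path described above; the User entity, DAO, and page size are illustrative:

import androidx.lifecycle.LiveData
import androidx.paging.DataSource
import androidx.paging.LivePagedListBuilder
import androidx.paging.PagedList
import androidx.room.Dao
import androidx.room.Entity
import androidx.room.PrimaryKey
import androidx.room.Query

@Entity(tableName = "users")
data class User(@PrimaryKey val id: Long, val name: String)

@Dao
interface UserDao {
    // Room generates a paged data source that the Paging library consumes directly.
    @Query("SELECT * FROM users ORDER BY name")
    fun usersByName(): DataSource.Factory<Int, User>
}

// Exposes pages of 50 users at a time; observe this LiveData from your UI
// and submit each PagedList to a PagedListAdapter.
fun pagedUsers(dao: UserDao): LiveData<PagedList<User>> =
    LivePagedListBuilder(dao.usersByName(), /* pageSize = */ 50).build()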

Slices

And finally, rounding out the set of new features making their debut in Android Jetpack is the Slices component. A “slice” is a way to surface your app’s UI inside of the Google Assistant as a result of a search:

You can learn all about the Slices component and how to integrate it into your app on the Android Developer website.
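
For a feel of the API, a bare-bones slice provider might look like the sketch below. HelloSliceProvider and the strings are placeholders, exact builder signatures may differ slightly between library versions, and a production slice would also set a primary SliceAction and be declared as a provider in the manifest:

import android.net.Uri
import androidx.slice.Slice
import androidx.slice.SliceProvider
import androidx.slice.builders.ListBuilder

class HelloSliceProvider : SliceProvider() {

    override fun onCreateSliceProvider(): Boolean = true

    override fun onBindSlice(sliceUri: Uri): Slice? {
        val context = context ?: return null
        // Build a simple single-row slice for the requested URI.
        return ListBuilder(context, sliceUri, ListBuilder.INFINITY)
            .addRow(
                ListBuilder.RowBuilder()
                    .setTitle("Hello from the app")
                    .setSubtitle("Rendered inside Search or the Assistant")
            )
            .build()
    }
}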

Android KTX

And last but not least, one goal of Android Jetpack is to take advantage of Kotlin language features that make you more productive. Android KTX lets you transform Kotlin code like this:

// Without Android KTX: register a listener, then remove it after the first pre-draw pass.
view.viewTreeObserver.addOnPreDrawListener(
  object : ViewTreeObserver.OnPreDrawListener {
    override fun onPreDraw(): Boolean {
      view.viewTreeObserver.removeOnPreDrawListener(this)
      actionToBeTriggered()
      return true
    }
  })

into more concise Kotlin code like the following:

view.doOnPreDraw { actionToBeTriggered() }

This is just the first step in bringing Kotlin support to Android Jetpack components; our goal is to make Android Jetpack great for Kotlin developers (and of course Java developers!). You can read more about Android KTX on the Android Developer web site.

Getting Started

You can get started with Android Jetpack at developer.android.com/jetpack. You’ll find docs and videos for Android Jetpack, see what’s new in Android Jetpack components, participate in the community and give us feedback. We’ve also created a YouTube playlist devoted to Android Jetpack, so you can tune in for information about Android Jetpack, components, tools and best practices.

Getting Started with Android Jetpack will tell you how to bring the Android Jetpack components into your existing apps and help you get started with new Android Jetpack apps. Android Studio 3.2 has great tooling support for Android Jetpack. For building new apps, use the Activity & Fragment+ViewModel template, which you can get to from File | New | New Project in Android Studio:

What’s Next

With Android Jetpack, we’re taking the benefits of the Support Library and the Architecture Components and turning it up a notch with new components, Android Studio integration and Kotlin support. And while Android Jetpack provides the next generation components, tools and guidance to accelerate your Android development, we’ve got a lot more that we want to do and we want your help. Please go to developer.android.com/jetpack and let us know what we can do to make your experience building Android apps even better.

What’s new in Android P Beta

android P logo

Posted By Dave Burke, VP of Engineering


Earlier today we unveiled a beta version of Android P, the next release of Android. Android P puts AI at the core of the operating system and focuses on intelligent and simple experiences. You can read more about the new user features here.

For developers, Android P beta offers a range of ways to take advantage of these new smarts, especially when it comes to increasing engagement with your apps.

You can get Android P beta on Pixel devices by enrolling here. And thanks to Project Treble, you can now get the beta on top devices from our partners as well — Essential, Nokia, Oppo, Sony, Vivo, and Xiaomi, with others on the way.

Visit android.com/beta for the full list of devices, and details on how to get Android P beta on your device. To get started developing with Android P beta, visit developer.android.com/preview.

A smarter smartphone, with machine learning at the core

Android P makes a smartphone smarter, helping it learn from and adapt to the user. Your apps can take advantage of the latest in machine intelligence to help you reach more users and offer new kinds of experiences.

Adaptive Battery

Adaptive battery in Settings

Battery life is the number one concern we hear about from mobile phone users, regardless of the device they are using. In Android P we’ve partnered with DeepMind on a new feature we call Adaptive Battery that optimizes how apps use battery.

Adaptive Battery uses machine learning to prioritize access to system resources for the apps the user cares about most. It puts running apps into groups with different restrictions using four new “App Standby buckets” ranging from “active” to “rare”. Apps will change buckets over time, and apps not in the “active” bucket will have restrictions on jobs, alarms, network access, and high-priority Firebase Cloud Messages.

If your app is optimized for Doze, App Standby, and Background Limits, Adaptive Battery should work well for you right out of the box. We recommend testing your app in each of the four buckets. Check out the documentation for the details.
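
For testing, you can ask the platform which bucket your app is currently in; this minimal sketch assumes API 28 and uses UsageStatsManager:

import android.app.usage.UsageStatsManager
import android.content.Context

// Requires API 28. To force a bucket while testing, you can use:
//   adb shell am set-standby-bucket <package-name> rare
fun currentStandbyBucket(context: Context): Int {
    val usm = context.getSystemService(Context.USAGE_STATS_SERVICE) as UsageStatsManager
    // Returns one of STANDBY_BUCKET_ACTIVE, _WORKING_SET, _FREQUENT, or _RARE.
    return usm.appStandbyBucket
}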

App Actions

App Actions are a new way to raise the visibility of your app to users as they start their tasks. They put your app’s core capabilities in front of users as suggestions to handle their tasks, from key touch-points across the system like the Launcher and Smart Text Selection, Google Play, Google Search app, and the Assistant.

Actions use machine learning to surface just the right apps to users based on their context or recent interactions. Because Actions highlight your app where and when it’s most relevant, they’re a great way to reach new users and re-engage with existing users.

App Actions surfacing apps in the All Apps screen.

To support App Actions, just define your app’s capabilities as semantic intents. App Actions use the same catalog of common intents as conversational Actions for the Google Assistant, which surface on voice-activated speakers, Smart displays, cars, TVs, headphones, and more. There’s no API surface needed for App Actions, so they will work on any supported Android platform version.

Actions will be available soon for developers to try. Sign up here if you’d like to be notified.

Slices

Slice template example

Along with App Actions we’re introducing Slices, a new way for your apps to provide remote content to users. With Slices you can surface rich, templated UI in places like Google Search and Assistant. Slices are interactive with support for actions, toggles, sliders, scrolling content, and more.


Slices are a great new way to engage users and we wanted them to be available as broadly as possible. We added platform support in Android P, and we built the developer APIs and templates into Android Jetpack, our new set of libraries and tools for building great apps. Through Jetpack, your Slices implementation can target users all the way back to KitKat — across 95% of active Android devices. We’ll also be able to update the templates regularly to support new use cases and interactions (such as text input).


Check out the Getting Started guide to learn how to build with Slices — you can use the SliceViewer tool to see how your Slices look. Over time we plan to expand the number of places that your Slices can appear, including remote display in other apps.

Smart reply in notifications

The Smart Reply features in Gmail and Inbox are excellent examples of how machine intelligence can positively transform an app experience. In Android P we’ve brought Smart Replies to Notifications, with an API to let you provide this optimization to your users. To make it easier to populate replies in your notifications, you’ll soon be able to use ML Kit — see developers.google.com/mlkit for details.
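
As a rough sketch of the notification side (the channel ID, reply key, and icon are placeholders; the replies themselves are generated by the system, and MessagingStyle notifications are the recommended shape for this), marking a reply action as eligible looks roughly like:

import android.app.PendingIntent
import android.content.Context
import androidx.core.app.NotificationCompat
import androidx.core.app.RemoteInput

const val KEY_TEXT_REPLY = "key_text_reply"   // placeholder RemoteInput key
const val CHANNEL_ID = "messages"             // placeholder notification channel

fun messageNotification(context: Context, replyIntent: PendingIntent): NotificationCompat.Builder {
    val remoteInput = RemoteInput.Builder(KEY_TEXT_REPLY)
        .setLabel("Reply")
        .build()
    val replyAction = NotificationCompat.Action.Builder(
            android.R.drawable.ic_menu_send, "Reply", replyIntent)
        .addRemoteInput(remoteInput)
        .setAllowGeneratedReplies(true)   // lets the system attach its generated replies to this action
        .build()
    return NotificationCompat.Builder(context, CHANNEL_ID)
        .setSmallIcon(android.R.drawable.ic_menu_send)
        .setContentTitle("New message")
        .addAction(replyAction)
}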

Text Classifier

In Android P we’ve extended the ML models that identify entities in content or text input to support more types, like dates and flight numbers, and we’re making those improvements available to developers through the TextClassifier API. We’re also updating the Linkify API, which automatically creates links, to take advantage of these TextClassification models, and we’ve enriched the options the user has for quick follow-on actions. Developers will have the additional option of linkifying any of the entities recognized by the TextClassifier service. Smart Linkify has significant improvements in accuracy and precision of detection and performance.

Even better, the models are now updated directly from Google Play, so your apps can take advantage of model improvements using the same APIs. Once the updated models are installed, all of the entity recognition happens on-device and data is not sent over the network.
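
Here's a small Kotlin sketch of what calling the updated APIs can look like, assuming API 28 and that you run it off the main thread; the smartLinkify helper name is ours.

    import android.content.Context
    import android.text.SpannableString
    import android.view.textclassifier.TextClassificationManager
    import android.view.textclassifier.TextLinks

    // Ask the on-device TextClassifier to find entities (dates, flight numbers, addresses, ...)
    // in a piece of text and apply them as tappable links (API 28+).
    // Note: generateLinks() blocks, so call this from a background thread.
    fun smartLinkify(context: Context, input: CharSequence): SpannableString {
        val classifier = context
            .getSystemService(TextClassificationManager::class.java)
            .textClassifier

        val text = SpannableString(input)
        val request = TextLinks.Request.Builder(text).build()
        val links = classifier.generateLinks(request)

        // Turn each detected entity into a clickable span using the default span factory.
        links.apply(text, TextLinks.APPLY_STRATEGY_REPLACE, null)
        return text
    }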

Simplicity

We put a special emphasis on simplicity in Android P, evolving Android’s UI to streamline and enhance user tasks. For developers, the changes help improve the way users find, use, and manage your apps.

New system navigation

We’re introducing a new system navigation in Android P that gives users easier access to Home, Overview, and the Assistant from a single button on every screen. The new navigation simplifies multitasking and makes discovering related apps much easier. In the Overview, users have a much larger view of what they were doing when they left each app, making it much easier to see and resume the activity. The Overview also provides access to search, predicted apps, and App Actions, and takes users to All Apps with another swipe.

New system navigation in Android P giving faster access to recents and predicted apps.

Text Magnifier

In Android P we’ve also added a new Magnifier widget, designed to make it easier to select text and manipulate the text cursor. By default, classes that extend TextView automatically support the magnifier, but you can use the Magnifier API to attach it to any custom View, which opens it up to a variety of uses.
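
As one example of a custom use, the sketch below (Kotlin, API 28, helper name ours) attaches a Magnifier to an arbitrary view and drives it from touch events.

    import android.annotation.SuppressLint
    import android.view.MotionEvent
    import android.view.View
    import android.widget.Magnifier

    // Attach the platform Magnifier to any view so that a drag gesture
    // shows a magnified view of the content under the finger (API 28+).
    @SuppressLint("ClickableViewAccessibility")
    fun attachMagnifier(view: View) {
        val magnifier = Magnifier(view)
        view.setOnTouchListener { _, event ->
            when (event.actionMasked) {
                MotionEvent.ACTION_DOWN,
                MotionEvent.ACTION_MOVE -> magnifier.show(event.x, event.y)
                MotionEvent.ACTION_UP,
                MotionEvent.ACTION_CANCEL -> magnifier.dismiss()
            }
            true
        }
    }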

Background restrictions

Battery restrictions in Android P.

We’re making it simple for users to identify and manage apps that are using battery in the background. From our work on Android Vitals, Android can detect battery-draining app behaviors such as excessive wake locks. Now in Android P, Battery Settings lists such apps and lets users restrict their background activities with a single tap.

When an app is restricted, its background jobs, alarms, services, and network access are affected. To stay off of the list, pay attention to your Android Vitals dashboard in the Play Console, which can help you understand performance and battery issues.

Background Restrictions ensures baseline behaviors that developers can build for across devices and manufacturers. Although device makers can add restrictions on top of the core set, they must provide user controls via Battery Settings.

We’ve added a standard API to let apps check whether they are restricted, as well as new ADB commands to let you manually apply restrictions to your apps for testing. See the documentation for details. We also plan to add restrictions-related metrics to your Play Console Android Vitals dashboard in the future.
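
For instance, a minimal Kotlin check (API 28, helper name ours) that your app can use to detect the restricted state and scale back background work might look like this:

    import android.app.ActivityManager
    import android.content.Context

    // Check whether the user has placed this app in the restricted state
    // from Battery Settings (API 28+), so background work can degrade gracefully.
    fun isAppBackgroundRestricted(context: Context): Boolean {
        val am = context.getSystemService(ActivityManager::class.java)
        return am.isBackgroundRestricted
    }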

Enhanced audio with Dynamics Processing

Android P introduces a new Dynamics Processing Effect in the Audio Framework that lets developers improve audio quality. With Dynamics Processing, you can isolate specific frequencies and lower loud sounds or boost soft ones to enhance the acoustic quality of your application. For example, your app can improve the sound of someone who speaks quietly in a loud, distant, or otherwise acoustically challenging environment.

The Dynamics Processing API gives you access to a multi-stage, multi-band dynamics processing effect that includes a pre-equalizer, a multi-band compressor, a post-equalizer, and a linked limiter. It lets you modify the audio coming out of Android devices and optimize it according to the preferences of the listener or the ambient conditions. The number of bands and active stages is fully configurable, and most parameters, such as gains, attack/release times, and thresholds, can be controlled in real time.

To see what you can do with the Dynamics Processing Effect, please see the documentation.

Chart showing Dynamics processing levels vs standard audible levels.
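
To sketch the general shape of the API (Kotlin, API 28; the band counts, gain value, and helper name here are arbitrary choices of ours, not recommendations), configuring and enabling the effect on an audio session can look roughly like this:

    import android.media.audiofx.DynamicsProcessing

    // Attach a Dynamics Processing effect to an audio session and raise the input gain
    // slightly, e.g. to lift quiet speech. Stage and band counts here are arbitrary.
    fun attachDynamicsProcessing(audioSessionId: Int): DynamicsProcessing {
        val config = DynamicsProcessing.Config.Builder(
            DynamicsProcessing.VARIANT_FAVOR_FREQUENCY_RESOLUTION,
            /* channelCount   = */ 2,
            /* preEqInUse     = */ true,  /* preEqBandCount  = */ 4,
            /* mbcInUse       = */ true,  /* mbcBandCount    = */ 4,
            /* postEqInUse    = */ true,  /* postEqBandCount = */ 4,
            /* limiterInUse   = */ true
        ).build()

        val dp = DynamicsProcessing(/* priority = */ 0, audioSessionId, config)
        dp.setInputGainAllChannelsTo(6f)  // +6 dB input gain
        dp.setEnabled(true)
        return dp
    }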

Security

Biometric prompt

Biometric prompt is displayed by the system.

Android P provides a standard authentication experience across the growing range of biometric sensors. Apps can use the new BiometricPrompt API instead of displaying their own biometric auth dialogs. This new API replaces the FingerprintDialog API added in DP1. In addition to supporting Fingerprints (including in-display sensors), it also supports Face and Iris authentication, providing a system-wide consistent experience. There is a single USE_BIOMETRIC permission that covers all device-supported biometrics. FingerprintManager and the corresponding USE_FINGERPRINT permission are now deprecated, so please switch to BiometricPrompt as soon as possible.
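
A minimal Kotlin sketch of the new flow (API 28, USE_BIOMETRIC permission declared; the strings and callback bodies are placeholders) looks roughly like this:

    import android.content.Context
    import android.hardware.biometrics.BiometricPrompt
    import android.os.CancellationSignal

    // Show the system biometric auth dialog (API 28+). A real app would gate a
    // protected action on onAuthenticationSucceeded.
    fun showBiometricPrompt(context: Context) {
        val executor = context.mainExecutor

        val prompt = BiometricPrompt.Builder(context)
            .setTitle("Sign in")
            .setDescription("Confirm your identity to continue")
            .setNegativeButton("Cancel", executor) { _, _ -> /* user cancelled */ }
            .build()

        prompt.authenticate(
            CancellationSignal(),
            executor,
            object : BiometricPrompt.AuthenticationCallback() {
                override fun onAuthenticationSucceeded(result: BiometricPrompt.AuthenticationResult) {
                    // Proceed with the protected operation.
                }

                override fun onAuthenticationError(errorCode: Int, errString: CharSequence) {
                    // Surface the error to the user.
                }
            }
        )
    }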

Protected Confirmation

Android P introduces Android Protected Confirmation, which uses the Trusted Execution Environment (TEE) to guarantee that a given prompt string is shown to and confirmed by the user. Only after successful user confirmation will the TEE sign the prompt string, which the app can verify.
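
Here's a rough Kotlin sketch of the flow using the ConfirmationPrompt API (API 28; the prompt text and extra data are made-up placeholders, and a real app pairs this with a Keystore key that requires user confirmation):

    import android.content.Context
    import android.security.ConfirmationCallback
    import android.security.ConfirmationPrompt

    // Ask the user to confirm a sensitive action via Android Protected Confirmation (API 28+).
    // Only devices with the required TEE support this, so check isSupported() first.
    fun confirmTransaction(context: Context) {
        if (!ConfirmationPrompt.isSupported(context)) return

        val prompt = ConfirmationPrompt.Builder(context)
            .setPromptText("Send 25 USD to Alice?")          // shown by the TEE-backed UI
            .setExtraData("txn-1234".toByteArray())           // hypothetical transaction payload
            .build()

        prompt.presentPrompt(context.mainExecutor, object : ConfirmationCallback() {
            override fun onConfirmed(dataThatWasConfirmed: ByteArray) {
                // Have this confirmed blob signed with a confirmation-required key,
                // then verify the signature on your server.
            }

            override fun onDismissed() { /* user declined */ }
        })
    }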

Stronger protection for private keys

We’ve added StrongBox as a new KeyStore type, providing API support for devices that provide key storage in tamper-resistant hardware with isolated CPU, RAM, and secure flash. You can set whether your keys should be protected by a StrongBox security chip in your KeyGenParameterSpec.
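
A minimal Kotlin sketch (the key alias and parameters are illustrative) that requests a StrongBox-backed AES key could look like this:

    import android.security.keystore.KeyGenParameterSpec
    import android.security.keystore.KeyProperties
    import javax.crypto.KeyGenerator
    import javax.crypto.SecretKey

    // Generate an AES key inside StrongBox tamper-resistant hardware (API 28+).
    // On devices without StrongBox, generateKey() throws StrongBoxUnavailableException,
    // so fall back to the regular Keystore in that case.
    fun generateStrongBoxKey(): SecretKey {
        val spec = KeyGenParameterSpec.Builder(
            "my_strongbox_key",
            KeyProperties.PURPOSE_ENCRYPT or KeyProperties.PURPOSE_DECRYPT
        )
            .setBlockModes(KeyProperties.BLOCK_MODE_GCM)
            .setEncryptionPaddings(KeyProperties.ENCRYPTION_PADDING_NONE)
            .setIsStrongBoxBacked(true)   // request storage in the StrongBox security chip
            .build()

        val generator = KeyGenerator.getInstance(
            KeyProperties.KEY_ALGORITHM_AES, "AndroidKeyStore"
        )
        generator.init(spec)
        return generator.generateKey()
    }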

Android P Beta

Bringing a new version of Android to users takes a combined effort across Google, silicon manufacturers (SM), device manufacturers (OEMs), and carriers. The process is technically challenging and can take time — to make it easier, we launched Project Treble last year as part of Android Oreo. Since then we’ve been working with partners on the initial bring-up and now we’re seeing proof of what Treble can do.

Today we announced that 6 of our top partners are joining us to release Android P Beta on their devices — Sony Xperia XZ2, Xiaomi Mi Mix 2S, Nokia 7 Plus, Oppo R15 Pro, Vivo X21UD and X21, and Essential PH‑1. We’re inviting early adopters and developers around the world to try Android P Beta on any of these devices — as well as on Pixel 2, Pixel 2 XL, Pixel, and Pixel XL.

You can see the full list of supported partner and Pixel devices at android.com/beta. For each device you’ll find specs and links to the manufacturer’s dedicated site for downloads, support, and to report issues. For Pixel devices, you can now enroll your device in the Android Beta program and automatically receive the latest Android P Beta over-the-air.

Try Android P Beta on your favorite device today and let us know your feedback! Check out our post on Faster Adoption with Project Treble for more details.

Make your apps compatible

With more users starting to get Android P Beta on their devices, now is the time to test your apps for compatibility, resolve any issues, and publish an update as soon as possible. See the migration guide for steps and a recommended timeline.

To test for compatibility, just install your current app from Google Play onto a device or emulator running Android P Beta and work through the user flows. The app should run and look great, and handle the Android P behavior changes properly. In particular, pay attention to Adaptive Battery, the Wi-Fi permissions changes, restrictions on use of camera and sensors from the background, the stricter SELinux policy for app data, TLS enabled by default, and the Build.SERIAL restriction.

Compatibility through public APIs

It’s important to test your apps for uses of non-SDK interfaces. As noted previously, in Android P we’re starting a gradual process to restrict access to selected non-SDK interfaces, asking developers — including app teams inside Google — to use the public equivalents instead.

If your apps are using private Android interfaces and libraries, you should move to the public APIs in the Android SDK or NDK. The first developer preview displayed a toast warning for uses of non-SDK interfaces; starting in Android P Beta, non-exempted uses of non-SDK interfaces generate errors, so you'll now get exceptions thrown instead of a warning.

To help you identify reflective usage of non-SDK APIs, we’ve added two new methods in StrictMode. You can use detectNonSdkApiUsage() to warn when your app accesses non-SDK APIs via reflection or JNI, and you can use permitNonSdkApiUsage() to suppress StrictMode warnings for those accesses. This can help you understand your app’s use of non-SDK APIs — even if the APIs are exempted at this time, it’s best to plan for the future and eliminate their use.
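
For example, a debug-build setup using these new StrictMode methods might look like the following Kotlin sketch (helper name ours):

    import android.os.StrictMode

    // In debug builds, log whenever the app touches a non-SDK interface via reflection
    // or JNI, so hidden-API usage can be found before it breaks.
    fun enableNonSdkApiDetection() {
        StrictMode.setVmPolicy(
            StrictMode.VmPolicy.Builder()
                .detectNonSdkApiUsage()
                .penaltyLog()      // or penaltyDeath() to fail fast in CI
                .build()
        )
    }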

In cases where there is no public API that meets your use-case, please let us know immediately. We want to make sure that the initial rollout only affects interfaces where developers can easily migrate to public alternatives. More about the restrictions is here.

Test with display cutout

It’s also important to test your app with display cutout. You can now use several of our partner devices running Android P Beta to make sure your app looks its best with a display cutout. You can also use the emulated cutout support that’s available on any Android P device through Developer options.
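
If your app draws edge to edge, a small Kotlin sketch like the one below (the extension-function name is ours) opts the window into laying out around short-edge cutouts; pair it with WindowInsets.getDisplayCutout() in your views to keep critical controls out of the cutout area.

    import android.app.Activity
    import android.view.WindowManager

    // Let content draw into the cutout area on short edges so the app uses the
    // full display (API 28+).
    fun Activity.allowShortEdgesCutout() {
        window.attributes = window.attributes.apply {
            layoutInDisplayCutoutMode =
                WindowManager.LayoutParams.LAYOUT_IN_DISPLAY_CUTOUT_MODE_SHORT_EDGES
        }
    }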

Get started with Android P

When you’re ready, dive into Android P and learn about the many new features and APIs you can take advantage of in your apps. To make it easier to explore the new APIs, take a look at the API diff reports (API 27->DP2, DP1->DP2) along with the Android P API reference. Visit the Developer Preview site for details. Also check out this video highlighting what’s new for developers in Android P Beta.

To get started with Android P, download the P Developer Preview SDK and tools into Android Studio 3.1 or use the latest version of Android Studio 3.2. If you don’t have a device that runs Android P Beta, you can use the Android emulator to run and test your app.

As always, your feedback is critical, so please let us know what you think — the sooner we hear from you, the more of your feedback we can integrate. When you find issues, please report them here. We have separate hotlists for filing platform issues, app compatibility issues, and third-party SDK issues.

Experience augmented reality together with new updates to ARCore

Three months ago, we launched ARCore, Google’s platform for building augmented reality (AR) experiences. There are already hundreds of apps on the Google Play Store that are built on ARCore and help you see the world in a whole new way. For example, with Human Anatomy you can visualize and learn about the intricacies of the nervous system in 3D. Magic Plan lets you create a floor plan for your next remodel just by walking around the house. And Jenga AR lets you stack blocks on your dining room table with no cleanup needed after your tower collapses.

ar_usecases_pr_050718.gif

As announced today at Google I/O, we’re rolling out a major update to ARCore to help developers build more collaborative and immersive augmented reality apps.

  • Shared AR experiences: Many things in life are better when you do them with other people. That’s true of AR too, which is why we’re introducing a capability called Cloud Anchors that will enable new types of collaborative AR experiences, like redecorating your home, playing games and painting a community mural—all together with your friends. You’ll be able to do this across Android and iOS (a short code sketch showing how hosting and resolving anchors looks appears below).
google_justALine_GIF6_180506a.gif

Just a Line will be updated with Cloud Anchors, and available on Android & iOS in the coming weeks

  • AR all around you: ARCore now features Vertical Plane Detection, which means you can place AR objects on more surfaces, like textured walls. This opens up new experiences like viewing artwork above your mantelpiece before buying it. And thanks to a capability called Augmented Images, you’ll be able to bring images to life just by pointing your phone at them—like seeing what’s inside a box without opening it.
ARCore: Augmented Images
  • Faster AR development: With Sceneform, Java developers can now build immersive, 3D apps without having to learn complicated APIs like OpenGL. They can use it to build AR apps from scratch as well as add AR features to existing ones. And it’s highly optimized for mobile.

nyt_sceneform_050718.gif

The New York Times used Sceneform for faster AR development

Developers can start building with these new capabilities today, and you can try augmented reality apps enabled by ARCore on the Google Play Store.
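
To give a feel for the Cloud Anchors capability mentioned above, here's a minimal Kotlin sketch (ARCore 1.2-era method names; session setup, the per-frame loop, and the Cloud Anchors API key configuration are omitted) showing how an app enables the feature, hosts a local anchor, and resolves it on another device:

    import com.google.ar.core.Anchor
    import com.google.ar.core.Config
    import com.google.ar.core.Session

    // Turn on Cloud Anchors for an existing ARCore session.
    fun enableCloudAnchors(session: Session) {
        val config = Config(session)
        config.cloudAnchorMode = Config.CloudAnchorMode.ENABLED
        session.configure(config)
    }

    // Host a locally placed anchor so other devices can share it; poll the returned
    // anchor's cloudAnchorState each frame until it reports SUCCESS, then share its
    // cloudAnchorId with the other device.
    fun hostAnchor(session: Session, localAnchor: Anchor): Anchor =
        session.hostCloudAnchor(localAnchor)

    // On the second device, resolve the shared id back into an anchor in its own space.
    fun resolveAnchor(session: Session, cloudAnchorId: String): Anchor =
        session.resolveCloudAnchor(cloudAnchorId)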

Chromebooks are ready for your next coding project

This year we’re making it possible for you to code on Chromebooks. Whether it’s building an app or writing a quick script, Chromebooks will be ready for your next coding project.

Last year we announced a new generation of Chromebooks that were designed to work with your favorite apps from the Google Play store, helping to bring accessible computing to millions of people around the world. But it’s not just about access to technology, it’s also about access to the tools that create it. And that’s why we’re equipping developers with more tools on Chromebooks.

Pixelbook Android Terminal.jpg

Support for Linux will enable you to create, test, and run Android and web apps for phones, tablets, and laptops all on one Chromebook. Run popular editors, code in your favorite language, and launch projects to Google Cloud with the command line. Everything works directly on a Chromebook.

Linux runs inside a virtual machine that was designed from scratch for Chromebooks. That means it starts in seconds and integrates completely with Chromebook features. Linux apps can start with a click of an icon, windows can be moved around, and files can be opened directly from apps.

A preview of the new tool will be released on Google Pixelbook soon. Remember to tune in to Google I/O to learn more about Linux on Chromebooks, as well as more exciting announcements.

Jeff Zients Joins Facebook Board of Directors

Facebook today announced that Jeff Zients, the CEO of Cranemere, has been appointed to the company’s board of directors and audit committee, effective May 31, 2018, immediately following Facebook’s annual meeting of stockholders. Following Zients’s appointment the board will consist of seven independent, non-employee directors out of nine total directors. “I am proud to join […]


Hello World, AndroidX

Posted by Alan Viverette (/u/alanviverette), Kathy Kam (@kathykam) , Lukas Bergstrom (@lukasb)

Today, we launch an early preview of the new Android extension libraries (AndroidX) which represents a new era for the Support Library. Please previe…

Solving problems with AI for everyone

Today, we’re kicking off our annual I/O developer conference, which brings together more than 7,000 developers for a three-day event. I/O gives us a great chance to share some of Google’s latest innovations and show how they’re helping us solve problems for our users. We’re at an important inflection point in computing, and it’s exciting to be driving technology forward. It’s clear that technology can be a positive force and improve the quality of life for billions of people around the world. But it’s equally clear that we can’t just be wide-eyed about what we create. There are very real and important questions being raised about the impact of technology and the role it will play in our lives. We know the path ahead needs to be navigated carefully and deliberately—and we feel a deep sense of responsibility to get this right. It’s in that spirit that we’re approaching our core mission.

The need for useful and accessible information is as urgent today as it was when Google was founded nearly two decades ago. What’s changed is our ability to organize information and solve complex, real-world problems thanks to advances in AI.

Pushing the boundaries of AI to solve real-world problems

There’s a huge opportunity for AI to transform many fields. Already we’re seeing some encouraging applications in healthcare. Two years ago, Google developed a neural net that could detect signs of diabetic retinopathy using medical images of the eye. This year, the AI team showed our deep learning model could use those same images to predict a patient’s risk of a heart attack or stroke with a surprisingly high degree of accuracy. We published a paper on this research in February and look forward to working closely with the medical community to understand its potential. We’ve also found that our AI models are able to predict medical events, such as hospital readmissions and length of stays, by analyzing the pieces of information embedded in de-identified health records. These are powerful tools in a doctor’s hands and could have a profound impact on health outcomes for patients. We’re going to be publishing a paper on this research today and are working with hospitals and medical institutions to see how to use these insights in practice.

Another area where AI can solve important problems is accessibility. Take the example of captions. When you turn on the TV it’s not uncommon to see people talking over one another. This makes a conversation hard to follow, especially if you’re hearing-impaired. But using audio and visual cues together, our researchers were able to isolate voices and caption each speaker separately. We call this technology Looking to Listen and are excited about its potential to improve captions for everyone.

Saving time across Gmail, Photos, and the Google Assistant

AI is working hard across Google products to save you time. One of the best examples of this is the new Smart Compose feature in Gmail. By understanding the context of an email, we can suggest phrases to help you write quickly and efficiently. In Photos, we make it easy to share a photo instantly via smart, inline suggestions. We’re also rolling out new features that let you quickly brighten a photo, give it a color pop, or even colorize old black and white pictures.

One of the biggest time-savers of all is the Google Assistant, which we announced two years ago at I/O. Today we shared our plans to make the Google Assistant more visual, more naturally conversational, and more helpful.

Thanks to our progress in language understanding, you’ll soon be able to have a natural back-and-forth conversation with the Google Assistant without repeating “Hey Google” for each follow-up request. We’re also adding half a dozen new voices to personalize your Google Assistant, plus one very recognizable one—John Legend (!). So, next time you ask Google to tell you the forecast or play “All of Me,” don’t be surprised if John Legend himself is around to help.

We’re also making the Assistant more visually assistive with new experiences for Smart Displays and phones. On mobile, we’ll give you a quick snapshot of your day with suggestions based on location, time of day, and recent interactions. And we’re bringing the Google Assistant to navigation in Google Maps, so you can get information while keeping your hands on the wheel and your eyes on the road.

Someday soon, your Google Assistant might be able to help with tasks that still require a phone call, like booking a haircut or verifying a store’s holiday hours. We call this new technology Google Duplex. It’s still early, and we need to get the experience right, but done correctly we believe this will save time for people and generate value for small businesses.

  • Overview – Smart Compose.gif

    Smart Compose can understand the context of an email and suggest phrases to help you write quickly and efficiently.

  • Overview – Photos.gif

    With Google Photos, we’re working on the ability for you to change black-and-white shots into color in just a tap.  

  • Overview – Smart Display.jpg

    With Smart Displays, the Google Assistant is becoming more visual.

Understanding the world so we can help you navigate yours

AI’s progress in understanding the physical world has dramatically improved Google Maps and created new applications like Google Lens. Maps can now tell you if the business you’re looking for is open, how busy it is, and whether parking is easy to find before you arrive. Lens lets you just point your camera and get answers about everything from that building in front of you … to the concert poster you passed … to that lamp you liked in the store window.

Bringing you the top news from top sources

We know people turn to Google to provide dependable, high-quality information, especially in breaking news situations—and this is another area where AI can make a big difference. Using the latest technology, we set out to create a product that surfaces the news you care about from trusted sources while still giving you a full range of perspectives on events. Today, we’re launching the new Google News. It uses artificial intelligence to bring forward the best of human intelligence—great reporting done by journalists around the globe—and will help you stay on top of what’s important to you.

Overview - News.gif

The new Google News uses AI to bring forward great reporting done by journalists around the globe and help you stay on top of what’s important to you.

Helping you focus on what matters

Advances in computing are helping us solve complex problems and deliver valuable time back to our users—which has been a big goal of ours from the beginning. But we also know technology creates its own challenges. For example, many of us feel tethered to our phones and worry about what we’ll miss if we’re not connected. We want to help people find the right balance and gain a sense of digital wellbeing. To that end, we’re going to release a series of features to help people understand their usage habits and use simple cues to disconnect when they want to, such as turning a phone over on a table to put it in “shush” mode, or “taking a break” from watching YouTube when a reminder pops up. We’re also kicking off a longer-term effort to support digital wellbeing, including a user education site which is launching today.

These are just a few of the many, many announcements at Google I/O—for Android, the Google Assistant, Google News, Photos, Lens, Maps and more, please see our latest stories.

Google Lens: real-time answers to questions about the world around you

There’s so much information available online, but many of the questions we have are about the world right in front of us. That’s why we started working on Google Lens, to put the answers right where the questions are, and let you do more with what you see.

Last year, we introduced Lens in Google Photos and the Assistant. People are already using it to answer all kinds of questions—especially when they’re difficult to describe in a search box, like “what type of dog is that?” or “what’s that building called?”

Today at Google I/O, we announced that Lens will now be available directly in the camera app on supported devices from LGE, Motorola, Xiaomi, Sony Mobile, HMD/Nokia, Transsion, TCL, OnePlus, BQ, Asus, and of course the Google Pixel. We also announced three updates that enable Lens to answer more questions, about more things, more quickly:

First, smart text selection connects the words you see with the answers and actions you need. You can copy and paste text from the real world—like recipes, gift card codes, or Wi-Fi passwords—to your phone. Lens helps you make sense of a page of words by showing you relevant information and photos. Say you’re at a restaurant and see the name of a dish you don’t recognize—Lens will show you a picture to give you a better idea.  This requires not just recognizing shapes of letters, but also the meaning and context behind the words. This is where all our years of language understanding in Search help.

lens_menu_050718 (1).gif

Second, sometimes your question is not, “what is that exact thing?” but instead, “what are things like it?” Now, with style match, if an outfit or home decor item catches your eye, you can open Lens and not only get info on that specific item—like reviews—but see things in a similar style that fit the look you like.

lens_clothing_inPhone.gif

Third, Lens now works in real time. It’s able to proactively surface information instantly—and anchor it to the things you see. Now you’ll be able to browse the world around you, just by pointing your camera. This is only possible with state-of-the-art machine learning, using both on-device intelligence and cloud TPUs, to identify billions of words, phrases, places, and things in a split second.

lens_multielements_050718 (1).gif

Much like voice, we see vision as a fundamental shift in computing and a multi-year journey. We’re excited about the progress we’re making with Google Lens features that will start rolling out over the next few weeks.

Explore and eat your way around town with Google Maps

Google Maps has always helped you get where you need to go as quickly as possible, and soon, it’ll help you do even more. In the coming months, Google Maps will become more assistive and personal with new features that help you figure out what to eat, drink, and do–no matter what part of the world you’re in. So say goodbye to endless scrolling through lists of recommended restaurants or group texts with friends that never end in a decision on where to go. The next time you’re exploring somewhere new, getting together with friends, or hosting out-of-towners in your own city, you can use Google Maps to make quick decisions and find the best spots.

Find new things to do

The redesigned Explore tab will be your hub for everything new and interesting nearby. When you check out a particular area on the map, you’ll see dining, event, and activity options based on the area you’re looking at. Top trending lists like the Foodie List show you where the tastemakers are going, and help you find new restaurants based on information from local experts, Google’s algorithms, and trusted publishers like The Infatuation and others.

explore.gif

The redesigned Explore tab

We’ll even help you track your progress against each list, so if you’ve crossed four of the top restaurants in the Meatpacking District off your list, you’ll know that you have six more to try.

ExploreTab.png

Events and activities, options around you, and top lists

Your match

Tapping on any food or drink venue will display your “match”—a number that suggests how likely you are to enjoy a place and reasons explaining why. We use machine learning to generate this number, based on a few factors: what we know about a business, the food and drink preferences you’ve selected in Google Maps, places you’ve been to, and whether you’ve rated a restaurant or added it to a list. Your matches change as your own tastes and preferences evolve over time—it’s like your own expert sidekick, helping you quickly assess your options and confidently make a decision.

yourmatch.gif

Your match

Group planning made easy

When you need to corral a group for a meal or activity, there’s a new feature that makes it easier to coordinate. Long press on the places you’re interested in to add them to a shareable shortlist that your friends and family can add more places to and vote on. Once you’ve made a decision together, you can use Google Maps to book a reservation and find a ride.

Rally_anim_longpress_02.gif

Add places to a shortlist

Rally_anim_map_01.gif

Vote on where to go

Never miss a thing

The new “For you” tab is the best way to stay on top of the latest and greatest happening in the areas you’re into. You can choose to follow neighborhoods and dining spots you want to try so you’ll always have an idea for your next outing. Information about that new sandwich spot downtown, the surprise pop-up from your favorite chef, or that new bakery shaking up the pastry scene in Paris will now come straight to you.

ForYouTab_01.gif

The For you tab

You’ll start to see these features rolling out globally on Android and iOS in the coming months. Get ready to rediscover your world with Google Maps.

Eight Helpful Ways to Relieve Your Technostress

Great news for remote workers: technology is stressing us out. A 2017 study conducted by the University of Gothenburg, Sweden, proposed that cell phone, computer, and television use are directly linked to stress levels, quality of sleep, and even overall me…

Say Hello to Android Things 1.0

Posted by Dave Smith, Developer Advocate for IoT

Android Things is Google’s managed OS that enables you to build and maintain Internet of Things devices at scale. We provide a robust platform that does the heavy lifting with certified hardware, rich developer APIs, and secure managed software updates using Google’s back-end infrastructure, so you can focus on building your product.

After a developer preview with over 100,000 SDK downloads, we’re releasing Android Things 1.0 to developers today with long-term support for production devices. Developer feedback and engagement has been critical in our journey towards 1.0, and we are grateful to the over 10,000 developers who have provided us feedback through the issue tracker, at workshop events, and through our Google+ community.

Powerful production hardware

Today, we are announcing support for new System-on-Modules (SoMs) based on the NXP i.MX8M, Qualcomm SDA212, Qualcomm SDA624, and MediaTek MT8516 hardware platforms. These modules are certified for production use with guaranteed long-term support for three years, making it easier to bring prototypes to market. Development hardware and reference designs for these SoMs will be available in the coming months.



New SoMs from NXP, Qualcomm, and MediaTek

The Raspberry Pi 3 Model B and NXP i.MX7D devices will continue to be supported as developer hardware for you to prototype and test your product ideas. Support for the NXP i.MX6UL devices will not continue. See the updated supported platforms page for more details on the differences between production and prototype hardware.

Secure software updates

One of the core tenets of Android Things is powering devices that remain secure over time. Providing timely software updates over-the-air (OTA) is a fundamental part of that. Stability fixes and security patches are supported on production hardware platforms, and automatic updates are enabled for all devices by default. For each long-term support version, Google will offer free stability fixes and security patches for three years, with additional options for extended support. Even after the official support window ends, you will still be able to push app updates to your devices. See the program policies for more details on software update support.

Use of the Android Things Console for software updates is limited to 100 active devices for non-commercial use. Developers who intend to ship a commercial product running Android Things must sign a distribution agreement with Google to remove the device limit. Review the updated terms in the Android Things SDK License Agreement and Console Terms of Service.

Hardware configuration

The Android Things Console includes a new interface to configure hardware peripherals, enabling build-time control of the Peripheral I/O connections available and device properties such as GPIO resistors and I2C bus speed. This feature will continue to be expanded in future releases to encompass more peripheral hardware configurations.

Production ready

Over the past several months, we’ve worked closely with partners to bring products built on Android Things to market. These include Smart Speakers from LG and iHome and Smart Displays from Lenovo, LG, and JBL, which showcase powerful capabilities like Google Assistant and Google Cast. These products are hitting shelves between now and the end of summer.

Startups and agencies are also using Android Things to prototype innovative ideas for a diverse set of use-cases. Here are some examples we are really excited about:

  • Byteflies: Docking station that securely transmits wearable health data to the cloud
  • Mirego: Network of large photo displays driven by public photo booths in downtown Montreal

If you’re building a new product powered by Android Things, we want to work with you too! We are introducing a special limited program to partner with the Android Things team for technical guidance and support building your product. Space is limited and we can’t accept everyone. If your company is interested in learning more, please let us know here.

Additional resources

Take a look at the full release notes for Android Things 1.0, and head over to the Android Things Console to begin validating your devices for production with the 1.0 system image. Visit the developer site to learn more about the platform and explore androidthings.withgoogle.com to get started with kits, sample code, and community projects. Finally, join Google’s IoT Developers Community on Google+ to let us know what you’re building with Android Things!

Advancing the future of society with AI and the intelligent edge

The world is a computer, filled with an incredible amount of data. By 2020, the average person will generate 1.5GB of data a day, a smart home 50GB and a smart city, a whopping 250 petabytes of data per day. This data presents an enormous opportunity for developers — giving them a seat of power,…

The post Advancing the future of society with AI and the intelligent edge appeared first on The Official Microsoft Blog.