Delve deeper into Android development with our new course!

Posted by Jocelyn Becker, Senior Program Manager, Google Developer Training

If you know the basics of building Android apps and want to delve deeper, take a
look at our new Advanced Android Development course built by the Google
Developers Training team.

Do you want to learn how to use fragments, add widgets to your app, and
fine-tune your app’s performance? Make your app available to a diverse user base
through localization and accessibility features? Use sensors in your app? How
about creating custom views, drawing directly to the screen and running
animations?

Each lesson in our new course takes you through building an app that illustrates
an advanced concept, from incorporating maps into your app to using a
SurfaceView to draw outside the main UI thread.

This course is intended for experienced Java programmers who already know the
fundamentals of building Android apps. It is a follow-on course to our Android
Developer Fundamentals
course. The course is intended to be taught as
instructor-led training. However, all the materials are published online and are
available to anyone who wants to learn more advanced concepts of Android
development.

We have published detailed written tutorials, concept guides, slide decks, and
most importantly, a treasure trove of apps on GitHub. You can find links to
everything at developers.google.com/training/android-advanced.

Educational institutions worldwide are invited to use this course to teach their
students. Individual developers are welcome (and encouraged) to work through the
tutorials to learn on their own.

Each lesson presents a different, advanced topic, and you can teach or learn
each topic independently of the others.

Build apps as you learn how to use sensors, add places to your app, and draw
directly to a canvas. And much more!

The new course covers:

  • using fragments
  • building widgets
  • using sensors
  • measuring and improving application performance
  • localizing your app
  • making your app accessible
  • adding location, places and maps to your apps
  • creating custom views
  • drawing to the canvas
  • drawing to a SurfaceView off the main thread
  • running animations
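The last topic in that list lends itself to a short illustration. As a rough sketch of our own (not taken from the course materials), drawing to a SurfaceView off the main thread typically pairs a SurfaceHolder.Callback with a dedicated render thread:

```java
import android.content.Context;
import android.graphics.Canvas;
import android.graphics.Color;
import android.view.SurfaceHolder;
import android.view.SurfaceView;

// Minimal sketch: a background thread locks the surface's canvas, draws a
// frame, and posts it, keeping heavy drawing work off the UI thread.
public class DrawingSurfaceView extends SurfaceView implements SurfaceHolder.Callback {

    private Thread drawThread;
    private volatile boolean running;

    public DrawingSurfaceView(Context context) {
        super(context);
        getHolder().addCallback(this);
    }

    @Override
    public void surfaceCreated(final SurfaceHolder holder) {
        running = true;
        drawThread = new Thread(() -> {
            while (running) {
                Canvas canvas = holder.lockCanvas(); // may return null while the surface is torn down
                if (canvas == null) continue;
                try {
                    canvas.drawColor(Color.BLACK);
                    // ... draw the current frame here ...
                } finally {
                    holder.unlockCanvasAndPost(canvas); // push the finished frame to the screen
                }
            }
        });
        drawThread.start();
    }

    @Override
    public void surfaceChanged(SurfaceHolder holder, int format, int width, int height) { }

    @Override
    public void surfaceDestroyed(SurfaceHolder holder) {
        running = false; // stop the render loop before the surface goes away
        try {
            drawThread.join();
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }
}
```

The course's apps cover this pattern in full; the sketch only shows the thread-and-holder handshake that makes off-main-thread drawing safe.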

Learn more at developers.google.com/training/android-advanced.

The Mythical UX Designer – Five Common Misconceptions

Exploring the truth about the type of people who do UX. Ah, the UX designer. A mythical figure in high demand these days. Sought after for their skills in empathizing with customers, designing digital products that people love, and their peculiar love of collaboration. Their natural habitat is anywhere there are interfaces to problem solve […]

Final preview of Android 8.1 now available

Posted by Dave Burke, VP of Engineering

Starting today we’re rolling out an update to the Android 8.1 developer preview,
the last before the official launch to consumers in December. Android 8.1 adds
targeted enhancements to the Oreo platform, including optimizations for
Android Go (for devices with 1GB or less of memory) and a
Neural Networks API to accelerate on-device machine
intelligence. We’ve also included a few smaller enhancements to Oreo in response
to user and developer feedback.

If you have a device enrolled in the Android Beta Program, you’ll receive the
update over the next few days. If you haven’t enrolled yet, just visit the Android Beta site to enroll and get the
update.

At the official release in December we’ll bring Android 8.1 to all supported
Pixel and Nexus devices worldwide — including Pixel 2 and Pixel 2 XL, Pixel,
Pixel XL, Pixel C, Nexus 5X, and Nexus 6P. Watch for announcements soon.

What’s in this update?

This preview update includes near-final Android 8.1 system images for Pixel and
Nexus devices, with official APIs (API level 27), the latest optimizations and
bug fixes, and the November 2017 security patch updates. You can use the images
for compatibility testing or to develop using new Android 8.1 features like the
Neural Networks API and others.

The Neural Networks API provides accelerated computation and inference for
on-device machine learning frameworks like TensorFlow Lite — Google’s
cross-platform ML library for mobile — as well as Caffe2 and others. TensorFlow
Lite is now available to developers, so visit the TensorFlow Lite open source
repo for downloads and docs. TensorFlow Lite works with the Neural Networks API
to run models like MobileNets, Inception v3, and Smart Reply efficiently on your
mobile device.
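As a hedged sketch of how these pieces fit together (the model file name and tensor shapes below are illustrative placeholders, not from this post), running a bundled TensorFlow Lite model with NNAPI delegation enabled looks roughly like this:

```java
import java.io.File;

import android.content.Context;

import org.tensorflow.lite.Interpreter;

// Illustrative sketch: load a bundled TFLite model and ask the runtime to
// delegate to the Neural Networks API where an accelerator driver is
// available (Android 8.1+); it falls back to the CPU otherwise.
float[] classify(Context context) {
    // "mobilenet.tflite" and the 224x224x3 input are hypothetical placeholders.
    File modelFile = new File(context.getFilesDir(), "mobilenet.tflite");
    Interpreter interpreter = new Interpreter(modelFile);
    interpreter.setUseNNAPI(true);

    float[][][][] input = new float[1][224][224][3]; // one RGB image
    float[][] output = new float[1][1001];           // class scores
    interpreter.run(input, output);

    interpreter.close(); // release native resources when done
    return output[0];
}
```

The point of the sketch is the division of labor: the app talks to the TensorFlow Lite Interpreter, and the framework decides whether NNAPI-backed hardware can run the model.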

Also, for Pixel 2 users, the Android 8.1 update on these devices enables Pixel
Visual Core — Google’s first custom-designed co-processor for image processing
and ML — through a new developer option. Once enabled, apps using the Android
Camera API can capture HDR+ shots through Pixel Visual Core. See the release
notes for details.

Get your apps ready

With the consumer launch coming in December, it’s
important to test your current app now. This ensures that users transition
seamlessly to Android 8.1 when it arrives on their devices.

Just enroll your eligible device in Android Beta to get the latest update,
then install your app from Google Play and test. If you don’t have a Pixel or
Nexus device, you can set up an Android 8.1 emulator for testing instead. If you
notice any issues, fix them and update your app in Google Play right away —
without changing the app’s platform targeting.

When you’re ready, take advantage of new features and APIs in Android 8.1. See
the developer preview site, the API 27 diff report, and the updated API
reference for details.

Speed your development with Android Studio

To build with Android 8.1, we recommend updating to Android Studio 3.0, which is
now available from the stable channel. On top of the new app performance
profiling tools, support for the Kotlin programming language, and Gradle build
optimizations, Android Studio 3.0 makes it easier to develop with Android Oreo
features like Instant Apps, XML Fonts, downloadable fonts, and adaptive icons.

We also recommend updating to the Android Support Library 27.0.0, which is
available from Google’s Maven repository. See the version notes for details on
what’s new.

Publish your updates to Google Play

Google Play is open for apps compiled against or targeting API 27. When you’re
ready, you can publish your APK updates in your alpha, beta, or production
channels.

To make sure your app runs well on Android 8.1 as well as older versions, we
recommend using Google Play’s beta testing feature to run an alpha test on a
small group of users, then an open beta test on a much larger group. When you’re
ready to launch your update, you can use a staged rollout in your production
channel. We’re looking forward to seeing your app updates!

Give us your feedback

As always, your feedback is crucial, so please keep it coming!
We’ve set up different hotlists where you can report Android platform issues,
app compatibility issues, and issues with third-party SDKs and tools. We also
have a dedicated hotlist for Neural Networks API issues.

You can also give us feedback through the Android Developer community or Android
Beta community as we work towards the consumer release in December.

Getting Our Community Help in Real Time

By Guy Rosen, VP of Product Management When someone is expressing thoughts of suicide, it’s important to get them help as quickly as possible. Facebook is a place where friends and family are already connected and we are able to help connect a person in distress with people who can support them. It’s part of […]


Spreading holiday cheer with great deals on Google Play

As temperatures drop, stay warm and entertained with these hot holiday deals on Google Play. Starting today, you’ll be able to find your favorite movies, apps, games, music, TV and books at deep discounts. Just in time for the holidays, these deals for Black Friday and Cyber Monday run through November 27 in select markets.

Battle in your favorite games—not the crowds—on Black Friday.

Avoid store crowds and battle it out with a favorite game instead. Google Play offers discounts of up to 80 percent on premium games, including Call of Duty: Black Ops Zombies, LEGO Ninjago: Shadow of Ronin, LEGO® Jurassic World™ and more. You’ll also get special discounts, power-ups and unlimited lives for the perennially popular Gardenscapes and Homescapes games on Google Play.

Set the mood with Google Play Music.

‘Tis the season to start playing songs of cheer. You can get a Google Play Music subscription free for four months, for the right songs to suit your mood anytime.

Survive the season with must-have apps.

When you need a last-minute recipe or a mental break from those holiday errands, Google Play has you covered with discounts on hundreds of apps, including a 50 percent discount on a monthly subscription to Colorfy.

Take a turkey break with a movie or TV show.

Once the meal is done and the dishes are cleared, wind down with a favorite classic or a new release as Google Play offers 50 percent off any one movie to own and 25 percent off a TV season of your choice starting on November 23. You’ll also be able to rent any movie for 99 cents for one day only on November 25.

Whether it’s catching up on the latest episodes of “The Walking Dead” or “Outlander,” the latest Minion antics in “Despicable Me 3” or a young Peter Parker in “Spider-Man: Homecoming,” there’s something the entire family can enjoy.

Snuggle in with a good book.

The weather outside may be frightful, but a good book can be delightful. Whether it’s a bedtime story or the latest mystery, Google Play is offering a $5 credit towards any book over $5 and discounts on top titles starting on November 23. You can also find some of the most popular omnibus comics books, including Batman: The Complete Hush, Thor and Flashpoint, for $5 or less on November 25 only.

For more information on these and other deals throughout the season, head to Google Play’s Holiday Hub.

Tune in for the world’s first Google Translate music tour

Eleven years ago, Google Translate was created to break down language barriers. Since then, it has enabled billions of people and businesses all over the world to talk, connect and understand each other in new ways.

And we’re still re-imagining how it can be used—most recently, with music. The music industry in Sweden is one of the world’s most successful exporters of hit music in English—with artists such as Abba, The Cardigans and Avicii originating from the country. But there are still many talented Swedish artists who may not get the recognition or success they deserve outside of a small country up in the north.

This sparked an idea: might it be possible to use Google Translate with the sole purpose of breaking a Swedish band internationally?

Today, we’re presenting Translate Tour, in which up and coming Swedish indie pop group Vita Bergen will be using Google Translate to perform their new single “Tänd Ljusen” in three different languages—English, Spanish and French—on the streets of three different European cities. In just a couple of days, the band will set off to London, Paris and Madrid to sing their locally adapted songs in front of the eyes of the public—with the aim of spreading Swedish music culture and inviting people all over the world to tune into the band’s cross-European indie pop music.

Photo credit: Anton Olin.

William Hellström from Vita Bergen will be performing his song in English, Spanish and French.

Last year Google Translate switched from phrase-based translation to Google Neural Machine Translation, which means that the tool now translates whole sentences at a time, rather than just piece by piece. It uses this broader context to figure out the most relevant translation, which it then rearranges and adjusts to be more like a human speaking with proper grammar.

Using this updated version of Google Translate, the English, Spanish and French translations of the song were close to flawless. The translations will also continue to improve as the system learns from more people using it.

Tune in to Vita Bergen’s release event, live streamed on YouTube today at 5:00 p.m. CEST, or listen to the songs in Swedish (“Tänd Ljusen”), English (“Light the Lights”), Spanish (“Enciende las Luces”) and French (“Allumez les Lumières”).

Learn more about the world around you with Google Lens and the Assistant

Looking at a landmark and not sure what it is? Interested in learning more about a movie as you stroll by the poster? With Google Lens and your Google Assistant, you now have a helpful sidekick to tell you more about what’s around you, right on your Pixel.


When we introduced the new Pixel 2 last month, we talked about how Google Lens builds on Google’s advancements in computer vision and machine learning. When you combine that with the Google Assistant, which is built on many of the same technologies, you can get quick help with what you see. That means that you can learn more about what’s in front of you—in real time—by selecting the Google Lens icon and tapping on what you’re interested in.

Here are the key ways your Assistant and Google Lens can help you today:

  • Text: Save information from business cards, follow URLs, call phone numbers and navigate to addresses.
  • Landmarks: Explore a new city like a pro with your Assistant to help you recognize landmarks and learn about their history.
  • Art, books and movies: Learn more about a movie, from the trailer to reviews, right from the poster. Look up a book to see the rating and a short synopsis. Become a museum guru by quickly looking up an artist’s info and more. You can even add events, like the movie release date or gallery opening, to your calendar right from Google Lens.
  • Barcodes: Quickly look up products by barcode, or scan QR codes, all with your Assistant.

Google Lens in the Assistant will be rolling out to all Pixel phones set to English in the U.S., U.K., Australia, Canada, India and Singapore over the coming weeks. Once you get the update, go to your Google Assistant on your phone and tap the Google Lens icon in the bottom right corner.


We can’t wait to see how Google Lens helps you explore the world around you, with the help of your Google Assistant. And don’t forget, Google Lens is also available in Google Photos, so even after you take a picture, you can continue to explore and get more information about what’s in your photo. 

Moving Past GoogleApiClient

Posted by Sam Stern, Developer Programs Engineer

The release of version 11.6.0 of the Google Play services SDK moves a number of popular APIs to a new paradigm for accessing Google APIs on Android. We have reworked the APIs to reduce boilerplate, improve UX, and simplify authentication and authorization.

The primary change in this release is the introduction of new Task- and
GoogleApi-based APIs to replace the GoogleApiClient access pattern.

The following APIs are newly updated to eliminate the use of
GoogleApiClient:

  • Auth – updated the Google Sign In and Credentials APIs.
  • Drive – updated the Drive and Drive Resource APIs.
  • Fitness – updated the Ble, Config, Goals, History,
    Recording, Sensors, and Sessions APIs.
  • Games – updated the Achievements, Events, Games, Games
    Metadata, Invitations, Leaderboards, Notifications, Player Stats, Players,
    Realtime Multiplayer, Snapshots, Turn Based Multiplayer, and Videos APIs.
  • Nearby – updated the Connections and Messages
    APIs.

These APIs join others that made the switch in previous releases, such as the
Awareness, Cast, Places, Location, and Wallet APIs.

The Past: Using GoogleApiClient

Here is a simple Activity that demonstrates how one would access the Google
Drive API using GoogleApiClient using a previous version of the
Play services SDK:

public class MyActivity extends AppCompatActivity implements
        GoogleApiClient.OnConnectionFailedListener,
        GoogleApiClient.ConnectionCallbacks {

    private static final int RC_SIGN_IN = 9001;

    private GoogleApiClient mGoogleApiClient;

    @Override
    protected void onCreate(@Nullable Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);

        GoogleSignInOptions options =
               new GoogleSignInOptions.Builder(GoogleSignInOptions.DEFAULT_SIGN_IN)
                        .requestScopes(Drive.SCOPE_FILE)
                        .build();

        mGoogleApiClient = new GoogleApiClient.Builder(this)
                .enableAutoManage(this, this)
                .addConnectionCallbacks(this)
                .addApi(Auth.GOOGLE_SIGN_IN_API, options)
                .addApi(Drive.API)
                .build();
    }

    // ...
    // Not shown: code to handle sign in flow
    // ...

    @Override
    public void onConnectionFailed(@NonNull ConnectionResult connectionResult) {
        // GoogleApiClient connection failed, most API calls will not work...
    }

    @Override
    public void onConnected(@Nullable Bundle bundle) {
        // GoogleApiClient is connected, API calls should succeed...
    }

    @Override
    public void onConnectionSuspended(int i) {
        // ...
    }

    private void createDriveFile() {
        // If this method is called before "onConnected" then the app will crash,
        // so the developer has to manage multiple callbacks to make this simple
        // Drive API call.
        Drive.DriveApi.newDriveContents(mGoogleApiClient)
            .setResultCallback(new ResultCallback<DriveApi.DriveContentsResult>() {
                // ...
            });
    }
}

The code is dominated by the concept of a connection, despite using the
simplified “automanage” feature. A GoogleApiClient is only
connected when all APIs are available and the user has signed in (when APIs
require it).

This model has a number of pitfalls:

  • Any connection failure prevents use of any of the requested APIs, but using
    multiple GoogleApiClient objects is unwieldy.
  • The concept of a “connection” is inappropriately overloaded. Connection
    failures can result from Google Play services being missing or from
    authentication issues.
  • The developer has to track the connection state, because making some calls
    before onConnected is called will result in a crash.
  • Making a simple API call can mean waiting for two callbacks. One to wait
    until the GoogleApiClient is connected and another for the API call
    itself.

The Future: Using GoogleApi

Over the years the need to replace GoogleApiClient became apparent,
so we set out to completely abstract the “connection” process and make it easier
to access individual Google APIs without boilerplate.

Rather than tacking multiple APIs onto a single API client, each API now has a
purpose-built client class that extends GoogleApi. Unlike with
GoogleApiClient, there is no performance cost to creating many client objects.
Each client object abstracts the connection logic; connections are automatically
managed by the SDK in a way that maximizes both speed and efficiency.
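For example — as a sketch using the client factories available as of 11.6.0, with FusedLocationProviderClient standing in for any other per-API client — an app can now hold several lightweight clients side by side instead of wiring every API into one shared GoogleApiClient:

```java
import com.google.android.gms.auth.api.signin.GoogleSignIn;
import com.google.android.gms.auth.api.signin.GoogleSignInAccount;
import com.google.android.gms.auth.api.signin.GoogleSignInClient;
import com.google.android.gms.auth.api.signin.GoogleSignInOptions;
import com.google.android.gms.drive.Drive;
import com.google.android.gms.drive.DriveResourceClient;
import com.google.android.gms.location.FusedLocationProviderClient;
import com.google.android.gms.location.LocationServices;

// Each client is cheap to create and manages its own connection internally:
// no enableAutoManage, no ConnectionCallbacks, no shared connection state.
GoogleSignInOptions options =
        new GoogleSignInOptions.Builder(GoogleSignInOptions.DEFAULT_SIGN_IN)
                .requestScopes(Drive.SCOPE_FILE)
                .build();

GoogleSignInClient signInClient = GoogleSignIn.getClient(this, options);
GoogleSignInAccount account = GoogleSignIn.getLastSignedInAccount(this);

DriveResourceClient driveClient = Drive.getDriveResourceClient(this, account);
FusedLocationProviderClient locationClient =
        LocationServices.getFusedLocationProviderClient(this);
```

A failure in one API (say, a missing Drive scope) no longer blocks calls through the others, which was the first pitfall listed above.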

Authenticating with GoogleSignInClient

When using GoogleApiClient, authentication was part of the
“connection” flow. Now that you no longer need to manage connections, you
should use the new GoogleSignInClient class to initiate
authentication:

public class MyNewActivity extends AppCompatActivity {

    private static final int RC_SIGN_IN = 9001;

    private GoogleSignInClient mSignInClient;

    @Override
    protected void onCreate(@Nullable Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);

        GoogleSignInOptions options =
               new GoogleSignInOptions.Builder(GoogleSignInOptions.DEFAULT_SIGN_IN)
                        .requestScopes(Drive.SCOPE_FILE)
                        .build();

        mSignInClient = GoogleSignIn.getClient(this, options);
    }

    private void signIn() {
        // Launches the sign in flow, the result is returned in onActivityResult
        Intent intent = mSignInClient.getSignInIntent();
        startActivityForResult(intent, RC_SIGN_IN);
    }

    @Override
    protected void onActivityResult(int requestCode, int resultCode, Intent data) {
        super.onActivityResult(requestCode, resultCode, data);

        if (requestCode == RC_SIGN_IN) {
            Task<GoogleSignInAccount> task = 
                    GoogleSignIn.getSignedInAccountFromIntent(data);
            if (task.isSuccessful()) {
                // Sign in succeeded, proceed with account
                GoogleSignInAccount acct = task.getResult();
            } else {
                // Sign in failed, handle failure and update UI
                // ...
            }
        }
    }
}

Making Authenticated API Calls

Making calls to authenticated APIs is now much simpler and does not require
waiting for multiple callbacks.

    private void createDriveFile() {
        // Get currently signed in account (or null)
        GoogleSignInAccount account = GoogleSignIn.getLastSignedInAccount(this);

        // Synchronously check for necessary permissions
        if (!GoogleSignIn.hasPermissions(account, Drive.SCOPE_FILE)) {
            // Note: this launches a sign-in flow, however the code to detect
            // the result of the sign-in flow and retry the API call is not
            // shown here.
            GoogleSignIn.requestPermissions(this, RC_DRIVE_PERMS, 
                    account, Drive.SCOPE_FILE);
            return;
        }

        DriveResourceClient client = Drive.getDriveResourceClient(this, account);
        client.createContents()
                .addOnCompleteListener(new OnCompleteListener<DriveContents>() {
                    @Override
                    public void onComplete(@NonNull Task<DriveContents> task) {
                        // ...
                    }
                });
    }

Before making the API call we add an inline check to make sure that we have
signed in and that the sign in process granted the scopes we require.

The call to createContents() is simple, but it’s actually taking
care of a lot of complex behavior. If the connection to Play services has not
yet been established, the call is queued until there is a connection. This is in
contrast to the old behavior where calls would fail or crash if made before
connecting.

In general, the new GoogleApi-based APIs have the following
benefits:

  • No connection logic: calls that require a connection are queued until a
    connection is available. Connections are pooled when appropriate and torn
    down when not in use, saving battery and preventing memory leaks.
  • Sign-in is completely separated from the APIs that consume
    GoogleSignInAccount, which makes it easier to use authenticated APIs
    throughout your app.
  • Asynchronous API calls use the new Task API rather than
    PendingResult, which allows for easier management and chaining.
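As a small sketch of that last point (the Drive calls mirror the example above; the chaining methods are from the Play services Task API), two dependent calls can be sequenced without nesting callbacks:

```java
import android.util.Log;

import com.google.android.gms.drive.Drive;
import com.google.android.gms.drive.DriveFolder;
import com.google.android.gms.drive.DriveResourceClient;

// Sketch: chain two dependent Drive calls with the Task API instead of
// nesting ResultCallbacks; a failure at either step lands in the single
// failure listener at the end of the chain.
DriveResourceClient client = Drive.getDriveResourceClient(this, account);

client.getRootFolder()
        .continueWithTask(task -> {
            DriveFolder root = task.getResult(); // throws if the first step failed
            Log.d("Drive", "Root folder: " + root);
            return client.createContents();      // runs only after the first Task completes
        })
        .addOnSuccessListener(contents -> Log.d("Drive", "Contents ready"))
        .addOnFailureListener(e -> Log.w("Drive", "Drive call failed", e));
```

Compare this with the old pattern, where each PendingResult needed its own ResultCallback and error handling had to be duplicated at every step.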

These new APIs will improve your development process and enable you to make
better apps.

Next Steps

Ready to get started with the new Google Play services SDK?

Happy building!

Making Visual Messaging Even Better – Introducing High Resolution Photos in Messenger

By Sean Kelly & Hagen Green, Product Managers, Messenger The way people message today is no longer limited by just text; visual messaging as our new universal language is much more emotional and expressive. Whether you’re catching up over moments big and small — like a recent vacation, an amazing meal at a new restaurant, […]

Iterative Design Done Right: Insights and Tips from Wealthsimple’s Design Director

Tom Creighton is a hands-on kind of design director. As head of UX for Wealthsimple, an online investment management service, he believes in the power of iterative design to build and release features on his company’s apps, across multiple devices. We asked why he’s such an advocate of the iterative design approach and asked him to share some of his tips for UX designers who want to work more iteratively in their own careers.

Two years of Google.org grants for racial justice

For many years, bold leaders across the U.S. have been using technology to foster a national dialogue on systemic inequity. Through painful moments like the Charleston church shooting, Googlers, like many others, asked what we could do to advance a more inclusive society. Two years ago, alongside our Black Googler Network and its allies, Google.org started a formal grant portfolio to advance racial and social justice in the United States.

In the spirit of understanding and getting closer to these complex issues, we began funding nonprofits fighting for racial justice in the California Bay Area—home to Google and many deep-rooted justice movements. In 2016, we doubled down on our commitment by supporting national organizations using data science and research to measure disparities in our system of mass incarceration. And today, we’re building on this commitment with another $7.5 million in grants to organizations advancing reform in our justice system, bringing our support to $32 million total.

Through these latest grants, we continue to support data and research demonstrating the impact of mass incarceration. Last month, we supported LatinoJustice with a $1 million grant to improve the quality of Latinx criminal justice data and shape the narrative and storytelling on the impact of mass incarceration in Latinx communities. And today we’re providing a $4 million grant to the Vera Institute of Justice to help them build an authoritative data set that will allow researchers to measure the true economic impact of incarceration rates in rural areas.

Vera Institute: In Our Backyards


Many of our initial grantees are focused on data gathering, research and analysis. We’re now also investing in organizations working on systemic solutions. For example, we’re supporting the Leadership Conference Education Fund with a $2 million grant to bolster their effort to help more law enforcement jurisdictions work with community groups, who are a critical partner in policing. The Leadership Conference has a well-known track record in this area, and they will help establish best practices that lead to more constitutional policing, less crime, and more trust and accountability. Our $500,000 grant to the R Street Institute’s Justice for Work Coalition will support their efforts aimed to bring bipartisan support for criminal justice reform and to reduce barriers to employment following incarceration.

We’ll also continue to multiply the impact of our grants with skills-based volunteer support from Googlers. Just last month, 10 Google software engineers and data scientists volunteered with Google.org grantee the Center for Policing Equity (CPE) on a full-time basis for six weeks in New York. These 10 Googlers helped build and improve CPE’s National Justice Database, the nation’s first-ever database tracking national statistics on policing. They also built software, audited tools, and improved automation efforts to help CPE better process and analyze the reports they send to partner police departments.

Googler Austin Swift, a lead on the CPE Impact Immersion, rides along with an officer to understand his efforts to implement community-informed policing.

This isn’t the only time we’ve teamed up Googler volunteers with grantees. Earlier this year, we helped the Equal Justice Initiative launch Lynching in America, an interactive site that explores this difficult time in U.S. history. More than 200 Googlers have volunteered in grantee Defy Ventures‘ prison and post-release programs for aspiring business owners, known as Entrepreneurs-in-Training. Working with Defy, Googlers have hosted small business training courses on digital marketing, digital skills and public speaking.

In the year ahead, Google will continue to stand in solidarity with the fight for racial justice. We believe in a justice system based on equity for all, informed by data and supported by community-based solutions. We’re proud to support organizations tackling this complex and worthy challenge.