
Latest Tech Feeds to Keep You Updated…

The High Five: what people are searching this week

Every week, we share a glimpse of what people are searching for on Google, with data from the Google News Lab. Here are a few of this week's top trends:  

Las Vegas

Many are still coming to terms with the tragic Las Vegas shooting that claimed the lives of 59 people and injured hundreds more. A few of the most-searched questions about the shooting were “What gun was used in the Las Vegas shooting?” “How long did the Las Vegas shooting last?” and “How many people died in the Las Vegas shooting?” After the shooting occurred, search interest in “gun control” went up more than 3000 percent, compared to the previous week.

Saying goodbye to a legend

Iconic guitarist Tom Petty passed away this week. When the news broke, people searched “Is Tom Petty really dead?” “How old was Tom Petty?” and “Why did Tom Petty die?” Meanwhile, search interest in “Tom Petty songs” reached an all-time high. On the day of his death, the artist’s most-searched songs were “Free Fallin’,” “Wildflowers,” “American Girl,” “I Won’t Back Down,” and “Home.”

Give the people what they want

McDonald’s is bringing back its famed Szechuan sauce—originally introduced for a limited time in 1998—after it was the subject of a “Rick and Morty” episode back in April. Fans of the show and the sauce are searching “Which McDonald’s have Szechuan sauce?” “When does Szechuan sauce come back?” and “Is Szechuan sauce good?” Search interest in Rick and Morty’s “Szechuan sauce episode” was 780 percent lower than interest in “Szechuan sauce locations.” Other top-searched McDonald’s dipping sauces include honey mustard, Sriracha Mac Sauce, and spicy buffalo.

Not loving this one

“Love is not an ingredient.” This was a top search this week, and apparently the FDA agrees. It told a bakery in Massachusetts to remove “love” from its list of ingredients in a popular brand of bread for fear of “deceptive labeling.” Love may not be allowed on the ingredient label, but other top trending ingredients this week were szechuan sauce (thanks, McDonald’s), mooncake ingredients, shepherd’s pie ingredients, and Hollandaise ingredients.

Sky aglow

This Thursday marked the first October Harvest Moon since 2009, and the next one isn’t expected until 2020. People searched to find out when the October Harvest Moon was happening, how to see it, and “what planets are surrounding the October Harvest Moon?” The top regions searching for the October Harvest Moon were Maine, Rhode Island, and Oregon.

Driving Alignment Through an Open Creative Process with Dropbox

This post was submitted by Dropbox, a 2017 MAX partner. We’d like to thank all our 2017 MAX partners who help make the conference possible.

Today’s creative teams are made up of a fluid workforce: freelancers, vendors, agencies, and cross-functional in-house teams. We’re varied, multidisciplinary, and scattered across continents. And that makes it harder than ever to keep everyone on the same page. At Dropbox, we believe one of the best ways to keep teams in sync and bring ideas to life is through transparency.

In the Dropbox Brand Studio, our teams are made up of graphic designers, web designers, illustrators, producers, strategists, and creative writers. We define the visual identity system and voice for the brand. We produce creative for product launches and marketing campaigns. And we collaborate with product teams to name and add personality to the product.

Ultimately, we help keep people aligned by leading creative processes that unite work between many different teams—Marketing, Product, Communications, Sales—along with our network of agencies, vendors, and freelancers. We use creative strategy and production to build the bridges that connect these teams. These processes help us tell a meaningful story about our company.

The way we work isn’t working

Now that new technology lets us collaborate with people around the world, our teams have never been more distributed. We’re in different departments, working from different offices, across different time zones. This new way of working is especially challenging for those of us in Marketing and Design. We’re working at a breakneck pace, churning out high volumes of content that has to break through the noise and the increasingly selective filters of today’s audiences.

Everyone needs space to create their best work, yet we want our collaborators to get involved early on to make sure we’re creating the right thing. We want to show polished and refined work—but people want to be a part of the process. So how do we find a balance?

With so many projects going on at once, between many different departments and teams, we need to make sure that everyone is having the same conversation at the same time. And in the process, we need to build trusted relationships.

Embracing transparency throughout the creative process

As challenging as it can be, the best way to work collaboratively is to embrace transparency. Working transparently makes people more engaged and accountable. It shows people you’re willing to figure out problems with everyone on the team. And it removes ego by encouraging people to work together and share the responsibility of bringing a project to life.

We spend a lot of time thinking about this at Dropbox. Our mission is to simplify the way people work together. We started in 2007 with the idea that life would be a lot better if people could move their stuff into the cloud and access it from anywhere, on any device. Since then, we’ve made major progress. And we’ve discovered that for a lot of our users, sharing and collaborating on Dropbox was even more valuable than providing storage. So we’ve made a commitment to expand our focus from keeping files in sync to keeping teams in sync.

New collaboration tools that unleash your team’s creative energy

Our customers have given us tremendous insights about the challenges of teamwork. We’ve studied what hinders the creative process and examined what highly successful teams do well. And we use this insight not only to build new tools that help teams unleash their creative energy, but to improve how we work together.

It’s still a work in progress, but we’re committed as a company to addressing the underlying problems designers, writers, artists, and marketers face. To start, we’ve created a culture that embraces transparency and offers a safe place to create, without judgment. And we’ve developed new technologies like Dropbox Paper that bring focus and flow to your work, facilitating team transparency and driving alignment.

At Adobe MAX, Dropbox’s own Collin Whitehead and Aaron Robbs will share from their experiences and explore these topics during the session, ‘Transparent Teams: Driving Alignment Through an Open Creative Process,’ on Wednesday, Oct. 18 at 3:30pm. Don’t miss out on this engaging discussion – register now before the session sells out.

 

Remembering Sputnik as US aims to advance computer science education

Sixty years ago this week, the Soviet Union launched Sputnik, the first satellite to orbit the Earth. The historic moment meant not only that the space race with the United States was on, but so were U.S. efforts to provide stronger math and science education to its young people, write Brad Smith, Microsoft president, and Carol Ann Browne, Microsoft director of executive communications, in a new post in the “Today in Technology” series on LinkedIn.

Today, “Computer science is to our time what physics was to the middle of the 20th century,” Smith and Browne write. “It is reshaping every part of society, and no nation can prosper without providing its students with an opportunity to learn to code. This has sparked a new movement to bring computer science to schools, led by non-profit groups and companies across the tech sector.”

Decades later, leaders on both sides of the political aisle have taken steps to support better math and science education in the U.S. “As in the year following Sputnik, the biggest recent advances are attributable to two individuals from differing parts of the political spectrum,” Smith and Browne write.

The first was former President Barack Obama, who proposed federal funding to bring computer science into the nation’s schools. The second is Ivanka Trump, who as advisor to the president, consulted with non-profit and tech leaders and formulated a $1 billion, five-year plan to provide federal funding to advance computer science and other science and math subjects in the nation’s public schools.

“It’s easy to look at the events of 2017 and yearn for a Sputnik moment that can unite the nation,” they note. “But even in the absence of such an opportunity, it’s heartening to see leaders from both political parties recognize that, as in 1957, technology is on the move. The future of our children requires that education move forward with it.”

Read the full post on LinkedIn.


Caught in Limbo: Art and Life in an In-Between World

Art reflects the joys and anxieties of the times, so this month we got to wondering what young artists have to tell us. We know they’re coming of age in a politically charged and economically uncertain world. And, like the generations before them, they want to skip the mistakes their parents and grandparents made and change the world. But in an era of fast-moving politics and even faster evolving technology, how will the emerging generation make their mark?

Living in limbo.

Just as millennials started coming of age, the economy took a plunge, so their experiences as adults thus far are characterized — more than anything — by a sense of uncertainty. The New York Times called them “a generation in limbo,” waiting for the economy to re-stabilize. In England, the feeling of career and financial uncertainty became even more intense with last year’s Brexit vote, when older generations overruled a strong millennial preference to stick with the EU.

While they wait for their career prospects to improve, a lot of young people are settling for jobs without much of a path ahead, living with their parents, and taking more time than past generations to attain financial stability. When The New York Times talked to young college grads about the situation, Amy Klein told them how her fellow Harvard classmates were managing. “They are thinking more in terms of creating their own kinds of life that interests them, rather than following a conventional idea of success and job security,” she explained.

For Amy, this meant joining a touring punk rock band. For others it means volunteering to find meaningful work, or exploring their artistic talents. We hope this is a silver lining — that millennials with the time and inclination to cultivate their creativity and find their voices will push the art world in unexpected, exciting new directions.

Reflecting a fractured zeitgeist.

Of course, finding your voice isn’t easy, and these are complicated times. Consider Eric Yahnkers, whose art draws on pop culture to ask deliberately uncomfortable questions about racism, sexism, and elitism. Eric recently talked to Vice about his work, and how hard it is for millennials and Gen X-ers to navigate their places in a politically-charged moment:

“My recent work centers on the current neo-progressive sociopolitical zeitgeist, and maybe more specifically a group of predominantly white, educated, middle-to-upper-middle-class millennials and gen x-ers caught in a clumsy limbo of wanting to join the battle for sweeping social reform and equality, while desperately trying to shed the stigma of their own perceived privilege and ancestral ties to cringe-worthy conduct. It’s an inner-negotiation that often leads to awkward bouts of overcompensation and inadvertent ignorance and discrimination,” he says.

In his art, Yahnkers puts familiar pop-culture icons in the context of current political debates. For example, his drawing “Purple Lives Matter” is an image of Prince, astride a motorcycle, wearing his purple velvet suit and familiar, mysterious gaze, but to the sides are police officers holding him at gunpoint.

“This piece was one that made me a bit uncomfortable,” Eric told Vice. “The piece obviously addresses the ‘Black Lives Matter’ versus ‘All Lives Matter’ paradigm, which has become a symbol or dog whistle to identify detractors to the cause, open and closeted bigots… Prince is the perfect hue of purple to firmly entrench the message in the confusing space between empowerment and ignorance.”

If it’s uncomfortable, turn it upside down.

Among the creative trends we’re watching is young artists’ refusal to let the status quo go unquestioned. Take, for example, the new app Beme, which lets users capture and post short videos — but they can’t review or edit the videos before they go live. It’s part of a larger movement to deconstruct the hyper-curated world of social media. According to Beme’s creator Casey Neistat, “Truth is so much more interesting than the fiction we’re used to.”

In a similar vein, Wanted Design recently created a pop-up art installation, DataCafé.biz, to question our relationship to our personal data. Rather than just accepting that corporations collect and sell information about us, Data Café highlights the transaction by parodying a blood donation. Users receive internet access and a cookie in exchange for their data, along with a thought-provoking sticker that says, “I gave data today.”

A month in limbo.

Keep following us on the blog this month as we look at more of the ways young designers are expressing themselves, and a world, in limbo. We’ll ask them how they manage fast-changing creative tools, when they decide to try new trends, and when they decide to go their own way. And be sure to visit this month’s gallery of curated stock about being caught in limbo.

Banner Image by marioav

Hovering Art Director Social Sweepstakes

Share one of the following social posts with your best advice for dealing with creative feedback under pressure for the chance to win a Hovering Art Director talking action figure!

Find inspiration by visiting the It’s Nice That article on Advice for Receiving Feedback Under Pressure.

The deadline to participate is Tuesday, October 10, 2017 at 5:00pm PST. Winners will be selected at random and notified via social media. All shares must be public in order to be eligible. View complete official rules here: Adobe Stock HAD Social Sweepstakes Rules

Designing Government Services for Everyone: Erica Deahl on The Role UX Plays in Creating Better Services

Erica Deahl

When it comes to the diversity of workplaces, Erica Deahl has experienced it all. From agencies to presidential campaigns to her current job as principal designer at Khan Academy, she believes in the power of good design to change people’s lives for the better. It was this drive that led her to become the lead UX designer on the U.S. Web Design Standards project, creating a library of design guidelines and code to help government developers and designers create trustworthy, accessible, and consistent digital government services.

At Adobe MAX, Erica will share insights into how UX design can revolutionize the way we interact with our governments in her talk, Designing Government Services for Everyone: A United UX for America. In her words, “In order to design a better immigration process, or to help teachers support students at different levels of learning, it’s critical to start by understanding the experience of the people relying on those products and services.” We asked her to share more of her story.

Why was it important to create the U.S. Web Design Standards?

In government, there are designers and developers in hundreds of agencies working to solve problems that are often very similar. Our team’s goal was to make it really easy for them to make good design choices. The U.S. Web Design Standards enable government teams to prototype and ship websites quickly, and they make it easier to share best practices for UX design and accessibility.

As teams across government have adopted the Standards, the sites they’ve shipped are accessible and use consistent UX patterns, which is a huge benefit for the people relying on those services.

What are the key UX design considerations when designing for government services?

In government, it’s mandatory for digital services to be accessible for everyone. Complying with accessibility guidelines is just a starting point–when you’re making design decisions, you have to constantly question whether those decisions will impair someone’s ability to use and understand the service. And you have to validate those choices by testing products with people in a wide range of accessibility contexts.

But accessibility isn’t the only constraint–there are sometimes legal or technical requirements that prevent you from choosing the clearest design direction, so you have to find workarounds that are both clear to users and legally compliant.

How important is consistency across government websites and apps?

People shouldn’t have to understand the complex organizational structure of government in order to benefit from the services it provides. Many government benefits require people to interact with numerous different agencies or departments, making the experience of seeking a benefit frustrating and disorienting. Establishing a consistent user experience throughout that journey makes it easier for people to understand and trust the process, and get to the outcomes they need faster.

Why is it important for you as a UX designer to work on government and public service-related projects?

There’s a huge amount of work we need to do to improve delivery of government digital services. Lots of agencies still rely on legacy systems that don’t work very well, and that means that the millions of people who rely on their services suffer. It also means that there’s a massive opportunity–even incremental design improvements make an enormous impact.

We need designers to help address those problems because designers are trained to learn about, understand, and empathize with the challenges faced within agencies and by the people they serve, and to design solutions.

Designers have an incredible opportunity to make a difference in government, and they don’t have to make a career sacrifice to do that work. Over the past few years, organizations like 18F and USDS have done some amazing work to build talented teams and enable designers to leverage their expertise on problems of a scale and complexity to rival the most exciting private sector opportunities.

To learn more about Erica Deahl and her work creating the U.S. Web Design Standards, check out the case study on her website or catch her talk at Adobe MAX.

Bavarian State Library and Google celebrate 10 years of partnership

Ten years ago, the venerable Bavarian State Library in Munich (BSB) and the comparatively young Google started their joint adventure: the digitization of hundreds of thousands of historical writings from the archives of the BSB and its Bavarian regional libraries. To celebrate the 10th anniversary of our collaboration, we’ve published a digital exhibition on Google Arts & Culture.

The BSB looks back on almost 500 years of history: it was founded in 1558 by Duke Albrecht V. With more than 10 million volumes, 61,000 current journals, and 130,000 manuscripts, the library is one of the most important knowledge centers in the world.

To preserve that heritage, BSB has been working with Google since 2007 to digitize over 1.9 million copyright-free titles—such as books, maps and magazines—from the 17th to the end of the 19th century. Thanks to this partnership, BSB now holds the largest digital collection of any German library. The project has since been expanded and now covers the holdings of the ten regional state libraries, such as those in Regensburg, Passau, and Augsburg.

This is clearly a milestone in digitization and a prototype for public-private partnerships, and not only for us at Google. Klaus Ceynowa, Managing Director of BSB, adds: “Content in context is our mantra. Google has played a major role in helping us achieve it!”

Image captions from the exhibition:

  • “Tausend und eine Nacht: arabische Erzählungen” (One Thousand and One Nights: Arabic Stories), Gustav Weil, 1872
  • “Atlas Minor: Ein kurtze jedoch gründtliche Beschreibung der gantzen Welt und aller ihrer Theyl” (1631), Gerard Mercator
  • “The lion” - illustration from “The small menagerie - drawings of the most extraordinary wild animals” (1854)
  • “The Zeitgeist and the people, a mirror of the sins of the world: An Octoberfest-Sermon” (1835)

Consumption models helping to enable business transformation

Co-Author: Larry Scherzer, Director, Financial Product Development

Digital transformation is really about business transformation and is driving the need for new consumption models. Gartner reports that 47 percent of CEOs are being challenged by their board of directors to make progress in digital business. The demands are becoming more diverse and complex as new business applications […]

From Phone Trials to Total Connection

Many of us feel constantly connected through our cell phones, but a missed call or text isn’t the end of the world. For lawyers, though, every phone call matters. A major California firm was going through some issues with their phones. Their provider’s solution couldn’t keep up with their workload. And they knew dropped calls […]

Creatives are Pressured to Deliver, Yet Constantly Interrupted

This post was submitted by Wrike, a 2017 MAX partner. We’d like to thank all our 2017 MAX partners who help make the conference possible.

When Kristina, a graphic designer in San Francisco, first logs in to her work computer for the day, she heads to her to-do list to check what needs to be done. Then she launches Adobe Photoshop or Illustrator to work on her vector illustrations or web design mock-ups. Thus begins her day-long marathon of switching between tools.

Throughout the day, as she finishes a design, she hits Alt+Tab to switch between her Adobe tool and her browser-based work management tool to ensure she’s completing her most important tasks first, or to check client feedback on newly submitted designs.

It’s a table tennis game — bouncing from one app to another to keep track of work. And it’s a huge distraction for Kristina and many others like her.

“I Could Produce More If There Were Fewer Distractions!”  

The research firm Oxford Economics surveyed 1,200 employees around the world to ask about work distractions. They found that employees rank the “capacity to focus on work without interruption” as their top priority, even above perks such as day care, free food, and the ubiquitous ping pong table.

Furthermore, according to the survey, managers felt their employees were well-equipped to deal with distractions at work. No surprise that less than half of the employees surveyed agreed with the managerial point-of-view.

Another study conducted by International Data Corporation (IDC) found that:

• 85% of creative professionals say they’re under pressure to develop assets and deliver campaigns faster
• 71% say they need to create 10x more assets than previously in order to support the diversity of channels
• 76% agree that personalization is driving the increased need for assets

This constant need for more deliverables isn’t going away anytime soon. It’s a reality everyone in the creative and marketing industries has to deal with. The big problem is how to keep producing such high volume while juggling so many tools and browser windows.

Integrating Tools: Think of Your Work Apps as a Team

With many online tools freely releasing public APIs, it has become much easier to build virtual bridges between formerly isolated software.

The entire raison d’être of services such as IFTTT and Zapier is to connect your apps so everything plays together nicely.

And software companies themselves are seeing the need to create integrations between their products and the work tools that their customers are already using.

Hence the various collaboration points built into Adobe Creative Cloud, where you can, for example, find a stock photo, license it, and edit it without ever leaving the Creative Cloud environment.

Today, Kristina uses Wrike for Marketers, with the Wrike Adobe Creative Cloud Extension that allows designers like her to access Wrike tasks from within Creative Cloud. This means she doesn’t have to keep a browser tab open in order to check on her to-dos or the status of an approval. She can simply click a menu item and never have to leave the Adobe workspace.

Not having to navigate away from her Adobe environment means Kristina no longer has to interrupt her flow. It means fewer distractions and more chances to focus on producing quality work — the work she was hired to do.

Rethinking iOS Apps in the Enterprise

Enterprises are deploying more and more business critical applications on their networks, making it very important that these apps have higher priority for performance. With the recent partnership announcements between Cisco and Apple, application developers now have the power to enable “Quality of Service” (QoS) tags right from within their application.   As an app developer, […]

Working with Multiple JobServices

Posted by Isai Damier, Software Engineer, Android DA

Working with Multiple JobServices

In its continuous effort to improve user experience, the Android platform has introduced strict limitations on background services starting in API level 26. Basically, unless your app is running in the foreground, the system will stop all of your app's background services within minutes.

As a result of these restrictions on background services, JobScheduler jobs have become the de facto solution for performing background tasks. For people familiar with services, JobScheduler is generally straightforward to use, except in a few cases, one of which we shall explore presently.

Imagine you are building an Android TV app. Since channels are very important to TV apps, your app should be able to perform at least five different background operations on channels: publish a channel, add programs to a channel, send logs about a channel to your remote server, update a channel's metadata, and delete a channel. Prior to Android 8.0 (Oreo), each of these five operations could be implemented within background services. Starting in API 26, however, you must be judicious in deciding which should be plain old background Services and which should be JobServices.

In the case of a TV app, of the five operations mentioned above, only channel publication can be a plain old background service. For some context, channel publication involves three steps: first, the user clicks a button to start the process; second, the app starts a background operation to create and submit the publication; and third, the user gets a UI to confirm subscription. So as you can see, publishing channels requires user interaction and therefore a visible Activity. Hence, ChannelPublisherService could be an IntentService that handles the background portion. The reason you should not use a JobService here is that a JobService will introduce a delay in execution, whereas user interaction usually requires an immediate response from your app.
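
For contrast, here is a minimal sketch of what that publication service could look like as a plain IntentService. Only the class name comes from this post; the createAndSubmitPublication() step is a hypothetical placeholder.

import android.app.IntentService;
import android.content.Intent;

public class ChannelPublisherService extends IntentService {

   public ChannelPublisherService() {
       super("ChannelPublisherService");
   }

   @Override
   protected void onHandleIntent(Intent intent) {
       // Runs on a worker thread while the publishing Activity is still visible,
       // so there is no JobScheduler-style execution delay before the work starts.
       // createAndSubmitPublication(intent); // hypothetical helper that builds and submits the channel
   }
}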

For the other four operations, however, you should use JobServices; that's because all of them may execute while your app is in the background. So respectively, you should have ChannelProgramsJobService, ChannelLoggerJobService, ChannelMetadataJobService, and ChannelDeletionJobService.

Avoiding JobId Collisions

Since all four JobServices above deal with Channel objects, it would be convenient to use the channelId as the jobId for each one of them. But because of the way JobServices are designed in the Android framework, you can't. The following is the official description of jobId:

Application-provided id for this job. Subsequent calls to cancel, 
or jobs created with the same jobId, will update the pre-existing 
job with the same id. This ID must be unique across all clients 
of the same uid (not just the same package). You will want to 
make sure this is a stable id across app updates, so probably not 
based on a resource ID.

What the description is telling you is that even though you are using 4 different Java objects (i.e. -JobServices), you still cannot use the same channelId as their jobIds. You don't get credit for class-level namespace.

This indeed is a real problem. You need a stable and scalable way to relate a channelId to its set of jobIds. The last thing you want is to have different channels overwriting each other's operations because of jobId collisions. Were jobId of type String instead of Integer, the solution would be easy: jobId= "ChannelPrograms" + channelId for ChannelProgramsJobService, jobId= "ChannelLogs" + channelId for ChannelLoggerJobService, etc. But since jobId is an Integer and not a String, you have to devise a clever system for generating reusable jobIds for your jobs. And for that, you can use something like the following JobIdManager.

JobIdManager is a class that you tweak according to your app's needs. For this TV app, the basic idea is to use a single channelId across all jobs dealing with Channels. To clarify, let's first look at the code for this sample JobIdManager class, and then we'll discuss it.

// Imports needed for the @IntDef/@Retention annotations used below
import android.support.annotation.IntDef;

import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;

public class JobIdManager {

   public static final int JOB_TYPE_CHANNEL_PROGRAMS = 1;
   public static final int JOB_TYPE_CHANNEL_METADATA = 2;
   public static final int JOB_TYPE_CHANNEL_DELETION = 3;
   public static final int JOB_TYPE_CHANNEL_LOGGER = 4;

   public static final int JOB_TYPE_USER_PREFS = 11;
   public static final int JOB_TYPE_USER_BEHAVIOR = 21;

   @IntDef(value = {
           JOB_TYPE_CHANNEL_PROGRAMS,
           JOB_TYPE_CHANNEL_METADATA,
           JOB_TYPE_CHANNEL_DELETION,
           JOB_TYPE_CHANNEL_LOGGER,
           JOB_TYPE_USER_PREFS,
           JOB_TYPE_USER_BEHAVIOR
   })
   @Retention(RetentionPolicy.SOURCE)
   public @interface JobType {
   }

   //16-1 = 15 bits reserved for the objectId (a positive short). Adjust per your needs
   private static final int JOB_TYPE_SHIFTS = 15;

   public static int getJobId(@JobType int jobType, int objectId) {
       if ( 0 < objectId && objectId < (1<< JOB_TYPE_SHIFTS) ) {
           return (jobType << JOB_TYPE_SHIFTS) + objectId;
       } else {
           String err = String.format("objectId %s must be between %s and %s",
                   objectId,0,(1<<JOB_TYPE_SHIFTS));
           throw new IllegalArgumentException(err);
       }
   }
}

As you can see, JobIdManager simply combines a prefix with a channelId to get a jobId. This elegant simplicity, however, is just the tip of the iceberg. Let's consider the assumptions and caveats beneath.

First insight: you must be able to coerce channelId into a Short, so that when you combine channelId with a prefix you still end up with a valid Java Integer. Now of course, strictly speaking, it does not have to be a Short. As long as your prefix and channelId combine into a non-overflowing Integer, it will work. But margin is essential to sound engineering. So unless you truly have no choice, go with a Short coercion. One way you can do this in practice, for objects with large IDs on your remote server, is to define a key in your local database or content provider and use that key to generate your jobIds.
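
If you ever need to read a jobId back, the same bit layout can be inverted. The two helpers below are not part of the JobIdManager shown above; they are an assumed addition included only to make the layout explicit: the upper bits carry the JOB_TYPE_ prefix, and the low JOB_TYPE_SHIFTS bits carry the objectId.

   // Hypothetical additions to JobIdManager, shown only to illustrate the bit layout
   public static int getJobType(int jobId) {
       return jobId >> JOB_TYPE_SHIFTS;             // recover the JOB_TYPE_ prefix
   }

   public static int getObjectId(int jobId) {
       return jobId & ((1 << JOB_TYPE_SHIFTS) - 1); // recover the channelId/objectId
   }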

Second insight: your entire app ought to have only one JobIdManager class. That class should generate jobIds for all your app's jobs: whether those jobs have to do with Channels, Users, or Cats and Dogs. The sample JobIdManager class points this out: not all JOB_TYPEs have to do with Channel operations. One job type has to do with user prefs and one with user behavior. The JobIdManager accounts for them all by assigning a different prefix to each job type.

Third insight: for each -JobService in your app, you must have a unique and final JOB_TYPE_ prefix. Again, this must be an exhaustive one-to-one relationship.

Using JobIdManager

The following code snippet from ChannelProgramsJobService demonstrates how to use a JobIdManager in your project. Whenever you need to schedule a new job, you generate the jobId using JobIdManager.getJobId(...).

import android.app.job.JobInfo;
import android.app.job.JobParameters;
import android.app.job.JobScheduler;
import android.app.job.JobService;
import android.content.ComponentName;
import android.content.Context;
import android.os.PersistableBundle;
import android.util.Log;

public class ChannelProgramsJobService extends JobService {
  
   private static final String TAG = "ChannelProgramsJobService";
   private static final String CHANNEL_ID = "channelId";
   . . .

   public static void schedulePeriodicJob(Context context,
                                          final int channelId,
                                          String channelName,
                                          long intervalMillis,
                                          long flexMillis) {
       JobInfo.Builder builder = scheduleJob(context, channelId);
       builder.setPeriodic(intervalMillis, flexMillis);

       JobScheduler scheduler =
               (JobScheduler) context.getSystemService(Context.JOB_SCHEDULER_SERVICE);
       if (JobScheduler.RESULT_SUCCESS != scheduler.schedule(builder.build())) {
           // Scheduling failed; report it (e.g. log to your analytics backend)
           Log.d(TAG, "could not schedule program updates for channel " + channelName);
       }
   }

   private static JobInfo.Builder scheduleJob(Context context, final int channelId) {
       ComponentName componentName =
               new ComponentName(context, ChannelProgramsJobService.class);
       final int jobId = JobIdManager
               .getJobId(JobIdManager.JOB_TYPE_CHANNEL_PROGRAMS, channelId);
       PersistableBundle bundle = new PersistableBundle();
       bundle.putInt(CHANNEL_ID, channelId);
       JobInfo.Builder builder = new JobInfo.Builder(jobId, componentName);
       builder.setPersisted(true);
       builder.setExtras(bundle);
       builder.setRequiredNetworkType(JobInfo.NETWORK_TYPE_ANY);
       return builder;
   }

   ...
}
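
One payoff of stable jobIds is worth spelling out: because JobIdManager always derives the same id for a given job type and channel, the very same call can later be used to update or cancel that specific job with no extra bookkeeping. The snippet below is an assumed usage sketch, not code from the original post.

   void cancelProgramUpdates(Context context, int channelId) {
       int jobId = JobIdManager.getJobId(JobIdManager.JOB_TYPE_CHANNEL_PROGRAMS, channelId);
       JobScheduler scheduler =
               (JobScheduler) context.getSystemService(Context.JOB_SCHEDULER_SERVICE);
       scheduler.cancel(jobId); // removes only this channel's program-update job
   }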

Footnote: Thanks to Christopher Tate and Trevor Johns for their invaluable feedback

Can’t Make it to Adobe MAX in Person? Watch it Live Online.

We’ll be live streaming the Keynotes for both days, so if you can’t make it to MAX in person, you can still see the latest releases and updates for Creative Cloud, and hear the inspiring stories from our creative speakers.

  • Wednesday, Oct. 18, 9am, PDT
    • Keep your finger on the pulse of the newest innovations as we reveal how you can work smarter and faster, all while taking your creative skills in new directions.
  • Thursday, Oct. 19, 10am, PDT
    • Hear their stories firsthand. Our day two keynote speakers will discuss their passions, process and creative journeys:
      • Annie Griffiths, photojournalist
      • Jon Favreau, actor/director
      • Jonathan Adler, potter/designer
      • Mark Ronson, musician

Register to watch online here, and don’t miss a minute of MAX!

We’re kicking off a new season—for E-rate funding

I’ve long believed that the ancient Romans had it wrong when they assigned January 1 as the start of a new year. For me, the “new year” really begins right about now, in the waning days of a New England summer and at the start of a colorful fall.

Why All UX Designers Should Be Creating User Journeys, And Here’s How To Make One

Good design is all about the user. If designers truly want to create the best products, it’s important for them to see the product from the user’s perspective. That’s where a tool called a user journey comes in. It’s a powerful combination of storytelling and visualization that helps designers identify opportunities to create new and improved experiences for their users. In this article, I’ll introduce the concept of a user journey along with some tips and specific examples.

What Is A User Journey?

A user journey is a visualization of the process that a person goes through in order to accomplish a goal. Typically, it’s presented as a series of steps in which a person interacts with a product. As opposed to the customer journey, which analyzes the steps before and after using the product, a user journey only examines what happens inside the app or website. In the context of an e-commerce website, for example, a user journey can consist of a number of pages and decision points that carry the user from one step to another in an attempt to purchase a product.


What’s Required to Create A User Journey?

The following elements are required to create a user journey:

  • Persona: User journeys are tied back to personas. To create a realistic user journey, it is important to first identify the users and create personas for them. When creating a user journey, it’s recommended to use one persona per journey in order to provide a strong, clear narrative.
  • Goal and Scenario: The exact goal the journey supports. The scenario presents a situation in which the persona tries to accomplish something. A user journey is best for scenarios that describe a sequence of events, such as purchasing something.
  • Context: A context is defined by a set of facts that surround a scenario, like the physical environment in which the experience is taking place. Where is the user? What is around them? Are there any other factors which may distract them?

What Does A User Journey Look Like?

A user journey can take a wide variety of forms depending on the context and your business goals. In its most basic form, a user journey is presented as a series of user steps and actions following a timeline skeleton. This kind of layout makes it easier for all team members to understand and follow the narrative.

A simple user journey only reflects one possible path during one scenario:

A simple user journey has one user, one goal, one scenario and one path even when a product/service allows multiple path variations. Image credits: uxstudioteam

A complex user journey can encompass experiences occurring during different times and scenarios:

A complex user journey reflects different users’ paths through the same flow. Image credits: Nform

While user journey maps can (and should) take a wide variety of forms, certain elements are generally included:

  • A title summarizing the journey (e.g. ‘Purchasing an electronic device in the e-commerce store’)
  • A picture of the persona the journey relates to.
  • A series of steps. Everything real-world users would do as a separate activity counts as a step. Steps should provide a sense of progression (each step should enable the persona to get to the next one).
  • An illustration of what’s happening in the step. This illustration includes touchpoints (times when a persona in the journey actually interacts with a product) and channels (methods of communication, such as the website or mobile app). For example, for the touchpoint ‘pay for product,’ the channels associated with this touchpoint could be ‘pay online’ or ‘pay in person.’
  • The persona’s emotional state at each step. A user journey is the most important tool for designing emotions; at the heart of a user journey is what the user is doing, thinking, and feeling during each step. Are users engaged, frustrated, or confused? Emotional experiences can be supplemented with quotes from your research.

How Does A User Journey Fit Into The UX Design Process?

User journeys are typically created at the beginning of a project — during the product analysis phase, after personas are defined. Along with personas, they can be one of the key design deliverables from this phase.

A user journey can be used to demonstrate either current or future user behavior:

  • When a user journey is used to show the current user behavior (the way users currently interact with the product) it should provide a clear view of how easy or difficult it is for a typical user to reach their goal.
  • When a user journey demonstrates the future state of the product (a ‘to-be’ experience), it should highlight any changes to pain points that a future solution will solve.

Why Should Designers Use a User Journey?

A user journey is used for understanding and addressing user needs and pain points. The entire point of the user journey is to understand user behavior, uncover gaps in the user experience, and then take action to optimize the experience.

There are many other benefits for designers when they invest time in user journeys. Properly-created user journeys can help designers better:

  • Communicate design decisions to stakeholders–As a document, a user journey can be used to clearly explain the strengths and weaknesses of the product in terms of UX.
  • Prioritize features–User journeys help identify possible functionality at a high level. By understanding the user’s key tasks, it’s possible to define functional requirements that will help enable those tasks. This helps product teams scope out pieces of functionality in more detail and speed up the planning of a new version of the product.

On a company level, user journeys can:

  • Shift a company’s view–Since user journeys are shorthand for the overall user experience, it’s possible to leverage them as a supporting component of an experience strategy. Creating a user journey could be the first step in building a solid plan of action to invest in UX and create one shared organization-wide vision.
  • Promote collaboration between different departments–Because a user journey creates a vision of the entire user experience, it becomes a tool for creating cross-departmental conversation and collaboration. User journeys can engage stakeholders from across departments and spur collaborative conversation.

8 Tips for Creating and Using A User Journey

Before Creating A User Journey

1. A User Journey Should Have A Business Goal behind It

Each user journey should always be created to support a known business goal. A user journey that doesn’t align with a business goal won’t result in applicable insight. That’s why identification of the business goal that the user journey will support should be the first step in the process.

2. A User Journey Should Be Based on User Research

The effectiveness and importance of a user journey depends heavily on the quality of insights it provides. User journeys should be built from both qualitative and quantitative findings. The process of creating a user journey has to begin with getting to know users. If designers don’t have enough information to create a good user journey, they should conduct additional journey-based research (such as ethnographic research) to gain insights into the user experience.

When Creating A User Journey

3. Don’t Jump Straight to Visualization

The temptation to create an aesthetic graphic can lead to beautiful yet flawed user journeys. It’s recommended to start with sticky notes on a wall or visualize the path with a simple spreadsheet. It’s important to experiment and not accept the first idea as the best.

4. Don’t Make It Too Complex

While designing a user journey, it’s easy to get caught up in the multiple routes a user might take. Unfortunately, this often leads to a busy user journey. It’s recommended to start with a simple, linear journey (an ideal way to get users to the given goal). Also, it’s better to avoid focusing too hard on the series of pages users go through. Instead, review what the users usually do and in what order.

5. More Ideas Lead to Better Design

It’s essential to involve all team members in the process of creating a user journey. The activity of creating a user journey (not the output itself) is the most valuable part of the process, and it’s helpful to have stakeholder participants from many areas of the organization involved in this activity. Mixing people who otherwise never communicate with each other can be extremely valuable, especially in large organizations.

Organize a collaborative workshop or brainstorming session, catch everyone up on the goals of the user journey and guide them through the process of creating the first draft. Image credits: UX Maze

Use Your User Journey

6. Assign Ownership

All too often, areas of negative friction in user journeys exist simply because no internal team or person is responsible for this area. Without ownership, no one has the responsibility or empowerment to change anything. That’s why it’s important to assign ownership for different parts of the journey map (e.g. key touchpoints) to internal departments or directly to responsible individuals.

7. Socialize Stakeholders

Getting stakeholders comfortable with user journeys is critical in moving your organization toward action. Reference your user journey during meetings and conversations to promote a narrative that others believe in and begin to use on a regular basis.

8. Maintain Journeys Over Time

Set a time each quarter or year to evaluate how your current user experience matches your documented user journeys. Consider when you may need to update the journey (such as after a major product release when the behavior of a user may change).

Conclusion

User journeys create a holistic view of user experience and this makes them an essential component in the process of designing a new product or improving the design of an existing one. By leveraging user journeys as a supporting component of an experience strategy it’s possible to keep users at the heart of all design decisions.

Digital Transformation and “Cloud Ready”

We hear a lot about digital transformation, hybrid cloud, and “lift and shift (to cloud)” from various marketing sources. Let’s examine that. About two years ago I blogged and presented “Is Your Network Cloud Ready?” (See also the presentation with more basics in it.) See also some of my more recent blogs at netcraftsmen.com. This […]

Will Robots Take Over the World and Destroy All Our Jobs?

Welcome to the fight of the century. In one corner there are the prophets of doom, who say that no job is safe from automation, and economic chaos is inevitable. And in the other corner, we have the rosy optimists, who believe technology will usher in a new era of meaningful work, more leisure time, […]

Introducing Portfolio’s New Integration With Adobe Lightroom

Whether you’re a weekend adventurer or working the red carpet, Adobe Lightroom is a critical tool in every photographer’s kit. Designed with creatives like you in mind, Adobe Portfolio makes showcasing your work effortless. And it just got even better. Now with the Lightroom integration on Adobe Portfolio, you can easily import your Collections and publish your best shots on your customized website in just a few clicks.

Website Pages & Integrations

When you head over to Manage Content on Adobe Portfolio, you’ll notice that the section has been broken into two tabs: Website Pages and Integrations. Website Pages show all of the Galleries and Pages currently created on your Portfolio. Integrations allow you to connect to your Adobe Lightroom Collections and set the gallery where future Behance projects will appear.

Adobe Lightroom Collections on Adobe Portfolio

Portfolio’s new Integration allows you to select any of the Lightroom Collections you’ve created and import the images to a Page on Adobe Portfolio. The entire Collection will be transformed into a Photo Grid within a new Page. You can edit the new Photo Grid to reorder or delete an image. As with every Page, you can add additional text, images, videos, or embedded content.

Behance Projects on Adobe Portfolio

Importing Projects from Behance has never been easier. If you have a Behance account associated with your Adobe ID, you’ll see a new option to set a default import Gallery. Going forward, whenever you create a new Project on Behance, Portfolio will automatically import it as a Page in the gallery you selected.

Integration Badges

To keep track of all of your Pages and their sources, we’ve also added product badges to the Manage Content section. Whenever you import from Adobe Lightroom or Behance, you’ll see a corresponding badge below the Page’s title. This is especially helpful when you want to re-import content you may have updated on Behance or Adobe Lightroom: simply click the gear icon next to the Page title and select the action you’d like to take.

Portfolio continues to make building your own customized creative website easier by leveraging one of the creative world’s most popular applications.

Learn more about our powerful photography-friendly features over at myportfolio.com/photography.

News Feed FYI: New Test to Provide Context About Articles

By Andrew Anker, Sara Su, and Jeff Smith

Today we are starting a new test to give people additional context on the articles they see in News Feed. This new feature is designed to provide people some of the tools they need to make an informed decision about which stories to read, share, and trust. It reflects feedback from our community, including many publishers who collaborated on its development as part of our work through the Facebook Journalism Project.

For links to articles shared in News Feed, we are testing a button that people can tap to easily access additional information without needing to go elsewhere. The additional contextual information is pulled from across Facebook and other sources, such as information from the publisher’s Wikipedia entry, a button to follow their Page, trending articles or related articles about the topic, and information about how the article is being shared by people on Facebook. In some cases, if that information is unavailable, we will let people know, which can also be helpful context.

Helping people access this important contextual information can help them evaluate whether articles are from a publisher they trust, and whether the story itself is credible. This is just the beginning of the test. We’ll continue to listen to feedback and work with publishers to give people easy access to the contextual information that helps them decide which stories to read, share, and trust, and to improve the experiences people have on Facebook.

How will this impact my page?

We anticipate that most Pages won’t see any significant changes to their distribution in News Feed as a result of this test. As always, Pages should refer to our publishing best practices and continue to post stories that are relevant to their audiences and that their readers find informative.

Get the latest on building the innovative apps of the future right now – at Connect(); 2017

The inspiration for our Connect(); event has always been about developers and the innovative applications they create. I am excited to announce our popular and highly anticipated Connect(); 2017 returns Nov. 15-17.

From the earliest days of computing, developers have shaped the future and changed the world. We’re at a critical inflection point where cloud, data and artificial intelligence (AI) are changing how humans interact with technology. Connect(); 2017 will show how we’re empowering developers to lead this new digital revolution by creating apps that will have a profound impact on the world.

Executive Vice President Scott Guthrie, alongside leading industry innovators, will share what’s next for developers across a broad range of Microsoft and open source technologies. In addition to live streaming the keynote from New York City, we have over 75 engineering-led, on-demand sessions and live hands-on training planned for this year’s event.

Whether you are creating cloud-native applications, targeting edge devices and the Internet of Things, infusing your apps with AI, or just getting started, Connect(); 2017 will equip you with the tools and skills you need to build the apps of the future.

I hope you will tune in and join us in November for what promises to be our best Connect(); yet!

Mitra

Mitra Azizirad is Corporate VP of Cloud Application Development, Data and AI Marketing, leading product marketing for developer and data platform offerings within Microsoft’s Cloud+Enterprise group. Mitra is also responsible for Microsoft’s strategy to democratize AI and make it accessible to every developer, and leads product marketing for AI-related products and services.


I. Am. A. Woman. In. Tech.

"You are a woman in tech. You are part of the 11% of women in cybersecurity. You make an impact. You're not alone - you have the #wearecisco fam behind you. Don't ever forget that." - Tammy Nguyen shares her story of realizing she was, indeed, a woman in tech.

Exploring contemporary art with Google Arts & Culture

Working with more than 180 partners all over the world, Google Arts & Culture is shining a light on contemporary art, with a new collection of online stories and rich digital content at g.co/ContemporaryArt.


Through an immersive digital journey, we bring you straight to the institutions housing the world’s seminal contemporary art collections with the help of high-quality visuals, gigapixel-resolution images (which allow you to zoom into the tiny details of a piece of art), and panoramic Museum View imagery. You can hear amazing stories about art from curators, artists, and experts from institutions all over the world.

With a repository of online exhibits and editorial features, we answer common questions about the contemporary art world, introduce you to the world’s leading contemporary artists and icons, and, perhaps most importantly, explore the issues that are shaping art today.



Explore more stories and immersive digital content on contemporary art from over 180 partners around the world with the Google Arts and Culture app on Android and iOS.


Look, Ma, no SIM card!

While phones have come a long way over the years, there’s one thing that hasn't changed: you still need to insert a SIM card to get mobile service. But on the new Pixel 2, Project Fi is making it easier than ever to get connected. It’s the first phone built with eSIM, an embedded SIM that lets you instantly connect to a carrier network with the tap of a button. This means you no longer need to go to a store to get a SIM card for wireless service, wait a few days for your card to arrive in the mail, or fumble around with a bent paper clip to coax your SIM card into a tiny slot. Getting wireless service with eSIM is as quick as connecting your phone to Wi-Fi.  


You’ll see the option to use eSIM to connect to the Project Fi network on all Pixel 2s purchased through the Google Store or Project Fi. If you’re already a Project Fi subscriber, simply power up your Pixel 2 to begin setup. When you’re prompted to insert a SIM card, just tap the button for SIM-free setup, and we’ll take care of the heavy lifting.


For now, we’re piloting eSIM on the newest Pixel devices with Project Fi. We look forward to sharing what we learn and working together with industry partners to encourage more widespread adoption.


While we can talk about eSIM all we want, nothing beats trying it for the first time. If you’d like to give it a go, head over to the Project Fi website to sign up and purchase the Pixel 2.

The best hardware, software and AI—together

Today, we introduced our second generation family of consumer hardware products, all made by Google: new Pixel phones, Google Home Mini and Max, an all new Pixelbook, Google Clips hands-free camera, Google Pixel Buds, and an updated Daydream View headset. We see tremendous potential for devices to be helpful, make your life easier, and even get better over time when they’re created at the intersection of hardware, software and advanced artificial intelligence (AI).


Why Google?

These days many devices—especially smartphones—look and act the same. That means in order to create a meaningful experience for users, we need a different approach. A year ago, Sundar outlined his vision of how AI would change how people would use computers. And in fact, AI is already transforming what Google’s products can do in the real world. For example, swipe typing has been around for a while, but AI lets people use Gboard to swipe-type in two languages at once. Google Maps uses AI to figure out what the parking is like at your destination and suggest alternative spots before you’ve even put your foot on the gas. But, for this wave of computing to reach new breakthroughs, we have to build software and hardware that can bring more of the potential of AI into reality—which is what we’ve set out to do with this year’s new family of products.

Hardware, built from the inside out

We’ve designed and built our latest hardware products around a few core tenets. First and foremost, we want them to be radically helpful. They’re fast, they’re there when you need them, and they’re simple to use. Second, everything is designed for you, so that the technology doesn’t get in the way and instead blends into your lifestyle. Lastly, by creating hardware with AI at the core, our products can improve over time. They’re constantly getting better and faster through automatic software updates. And they’re designed to learn from you, so you’ll notice features—like the Google Assistant—get smarter and more assistive the more you interact with them.


You’ll see this reflected in our 2017 lineup of new Made by Google products:

  • The Pixel 2 has the best camera of any smartphone, again, along with a gorgeous display and augmented reality capabilities. Pixel owners get unlimited storage for their photos and videos, and an exclusive preview of Google Lens, which uses AI to give you helpful information about the things around you.
  • Google Home Mini brings the Assistant to more places throughout your home, with a beautiful design that fits anywhere. And Max is our biggest and best-sounding Google Home device, powered by the Assistant. And with AI-based Smart Sound, Max has the ability to adapt your audio experience to you—your environment, context, and preferences.
  • With Pixelbook, we’ve reimagined the laptop as a high-performance Chromebook, with a versatile form factor that works the way you do. It’s the first laptop with the Assistant built in, and the Pixelbook Pen makes the whole experience even smarter.
  • Our new Pixel Buds combine Google smarts and the best digital sound. You’ll get elegant touch controls that put the Assistant just a tap away, and they’ll even help you communicate in a different language.
  • The updated Daydream View is the best mobile virtual reality (VR) headset on the market, and the simplest, most comfortable VR experience.
  • Google Clips is a totally new way to capture genuine, spontaneous moments—all powered by machine learning and AI. This tiny camera seamlessly sends clips to your phone, and even edits and curates them for you.

Assistant, everywhere

Across all these devices, you can interact with the Google Assistant any way you want—talk to it with your Google Home or your Pixel Buds, squeeze your Pixel 2, use your Pixelbook’s Assistant key, or circle things on your screen with the Pixelbook Pen. Wherever you are, and on any device with the Assistant, you can connect to the information you need and get help with the tasks that get you through your day. No other assistive technology comes close, and it continues to get better every day.

New hardware products

Google’s hardware business is just getting started, and we’re committed to building and investing for the long run. We couldn’t be more excited to introduce you to our second-generation family of products, which truly brings together the best of Google software and thoughtfully designed hardware with cutting-edge AI. We hope you enjoy using them as much as we do.

A new angle on your favorite moments with Google Clips

We love photos and videos. They take us back to a special time with our friends and family. Some of our favorites are genuine shots that capture the essence of the moment.


The trouble is, getting those spontaneous shots means that someone has to be the “designated photographer”—always waiting to snap a photo at just the right moment. I would have loved more images of me holding my kids, Clark and Juliet, when they were newborns, but because my wife and I had our hands full, these moments got away from us.


At Google, we’ve been working on a new type of camera that lets you capture more of these special moments, while letting you stay in the moment yourself.


Today we’re introducing Google Clips, a lightweight, hands-free camera that helps you capture more genuine and spontaneous moments of the people—and pets!—who matter to you. You can set the camera down on the coffee table when the kids are goofing around or clip it to a chair to get a shot of your cat playing with its favorite toy. There’s also a shutter button—both on the camera and in the corresponding app—so you can capture other moments or subjects, whatever you please.

Google Clips is small, weighs almost nothing, and comes with a clip to hold it steady.

We’ve put machine learning capabilities directly into Clips, so when you turn it on, the camera looks for good moments to capture. Clips looks for stable, clear shots of people you know. You can help the camera learn who is important to you, so when grandma comes to town, you’ll capture the grand entrance.
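The post doesn’t spell out the selection logic, but conceptually it comes down to scoring candidate frames and keeping the best ones. Here is a minimal, purely hypothetical Python sketch of that idea; the frame structure, weights, and threshold are illustrative assumptions, not Clips’ actual implementation:

```python
# Purely illustrative sketch of "look for good moments": score each candidate
# frame by how sharp it is and whether it shows faces the user has marked as
# important, then keep only the highest-scoring frames. All values are made up.

def score_frame(frame, familiar_faces):
    # A frame is modeled here as {"sharpness": float, "faces": set of labels}.
    familiar = len(frame["faces"] & familiar_faces)
    return frame["sharpness"] + 0.5 * familiar  # favor shots of people you know

def pick_clips(frames, familiar_faces, threshold=1.0):
    """Keep frames that look stable and clear and feature people who matter."""
    return [f for f in frames if score_frame(f, familiar_faces) >= threshold]

# Example: a blurry shot of a stranger is dropped; a crisp shot of grandma is kept.
frames = [
    {"sharpness": 0.3, "faces": {"stranger"}},
    {"sharpness": 0.9, "faces": {"grandma"}},
]
print(pick_clips(frames, familiar_faces={"grandma", "clark", "juliet"}))
```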

The camera shoots short motion photos that last several seconds. As you probably guessed, we call these “clips.”

Your clips sync wirelessly and in seconds from the camera to the Google Clips app for Android or iOS. Simply swipe to save or delete your clips, or choose an individual frame to save as a high-resolution still photo. You can view and organize anything you’ve saved in Google Photos (or your favorite gallery app). And if you’re using Google Photos, you can back up unlimited clips for free.


We know privacy and control really matter, so we’ve been thoughtful about this for Clips users, their families, and friends. Clips was designed and engineered with these principles in mind.

  • It looks like a camera, and lights up when it's on so everyone knows what Clips does and when it’s capturing.
  • It works best when used at home with family and close friends. As you capture with Clips, the camera learns to recognize the faces of people that matter to you and helps you capture more moments of them.
  • Finally, all the machine learning happens on the device itself. And just like any point-and-shoot, nothing leaves your device until you decide to save it and share it.  

Google Clips is coming soon to the U.S. for $249. In this first edition, Clips is designed specifically with parents and pet owners in mind. It works best with Pixel, and also works with Samsung S7/8 and on iPhone (6 and up).

We hope Google Clips helps you capture more spontaneous moments in life, without any of the hassle.

One of my favorite clips I’ve captured with my family.

The Google Assistant, powering our new family of hardware

Today we introduced Google Home Mini and Google Home Max, a new Pixel phone, a new Pixelbook and Pixelbook Pen, and Pixel Buds. Something all of these products have in common is the Google Assistant. With new Assistant features throughout the entire line-up, they’re built with the Assistant in mind, ready to help you get more done.

But let’s take a step back. Exactly one year ago today, we first introduced the Google Assistant, which lets you have a natural conversation with Google. We said the Assistant should be helpful, simple to use, available wherever you need it, and able to understand your context—your location, the device you’re using, and so on. And that’s exactly what we’ve been working toward. So before diving into what’s new today, let’s take a look at some of our highlights from the past year:

  • Hardware that works with your Assistant—Android phones, iPhones, headphones, voice-activated speakers like Google Home and others from several manufacturers, Android Wear and Android TV.
  • Your Assistant in more languages and places—Google Home in the U.K., Canada (English and French), Australia, Germany, France and, today, Japan. The Assistant on eligible Android phones and iPhones is also available in Brazilian Portuguese, Japanese, Korean and, coming soon, Italian, Spanish (in Mexico and Spain) and Singaporean English.
  • Smart home devices and platforms that work with your Assistant—you can now control over 1,000 smart home products from more than 100 brands, including August Home, Logitech Harmony, Nest, Philips Hue, SmartThings and Wemo.
  • Features to make your Assistant better—we’ve introduced Hands-Free Calling, reminders, shopping, shortcuts, step-by-step instructions to millions of recipes, and more. And of course Voice Match, which enables different household members to get personalized help on a shared device. So when you ask a question, the Assistant can recognize it’s your voice and respond with your news preferences, calendar, commute, and reminders. Starting today, Voice Match will be available in every country where Google Home is available (U.S., U.K., Australia, Canada, France, Germany and Japan).

We’ve come a long way in the past year, but we’re even more excited about what’s still in store, starting with what we’re announcing today. Here’s a look at what’s coming over the next few months:

Choose a new voice: The Assistant now has two voice options, starting in the U.S., so you can choose a voice that’s right for you. Try it today by going to settings in your Google Home app or Google Assistant on your phone and navigating to preferences.

Spend time with family: The Assistant will soon have more than 50 new ways for families to have fun, and with support for kids’ accounts managed with Family Link already on Android phones and coming to Google Home, you can have fun whether you’re on the go or at home. Soon, you’ll be able to say "Ok Google, let's play a game" and go on an adventure with Mickey Mouse, identify your alter ego with Justice League D.C. Super Hero, or play Freeze Dance in your living room. You can learn by saying "Let's learn" and then quiz yourself with games like "Talk Like a Chef" or "Play Space Trivia." When it's time for bed, try saying "Ok Google, tell me a story" to hear classics like Snow White and original stories like “The Chef Who Loved Potatoes.”

Manage your routines: Your Assistant will soon be able to help you manage your daily routines across your devices. So, once you’ve set up your preferences, when you say “Ok Google, let’s go home” your Assistant can update you about your commute, text your partner that you’re on your way and play your podcast where you left off. And when you get home, just say “Ok Google, I’m home,” and it will turn on the lights, adjust to your desired temperature and share your reminders.
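To make the shape of a routine concrete, here is a small, hypothetical Python sketch of the underlying pattern: one trigger phrase mapped to an ordered list of actions. The phrases, action names, and dispatcher are illustrative assumptions, not how the Assistant actually implements routines:

```python
# Hypothetical illustration of a routine: one spoken trigger fans out to an
# ordered list of actions. Everything here is made up for illustration.

ROUTINES = {
    "let's go home": [
        "report_commute",
        "text_partner_on_my_way",
        "resume_podcast",
    ],
    "i'm home": [
        "turn_on_lights",
        "set_preferred_temperature",
        "read_reminders",
    ],
}

def run_routine(phrase):
    """Look up the trigger phrase and run its actions in order."""
    for action in ROUTINES.get(phrase.lower(), []):
        print(f"running action: {action}")

run_routine("I'm home")
```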

Transactions: Over the next week, you’ll also be able to make fast and easy purchases with your Assistant, starting with 1-800-Flowers, Applebee’s, Panera and Ticketmaster. So you’ll be able to say, “Ok Google, talk to Ticketmaster” to your Assistant on your phone to find and buy your tickets.

Broadcast: With the new broadcast feature, your Assistant can round up the family and announce to Google Homes around the house that it’s dinner time. Just say, “Ok Google, broadcast: come on upstairs for dinner in 5 minutes.” The best part—you can even broadcast from your phone to Google Home with your Assistant. Just say "Ok Google, broadcast: I'm on my way!”

Explore with Google Lens: We’re bringing an early preview of Google Lens to Pixel phones. At the start, you’ll be able to look up landmarks, books, music albums, movies, and artwork by tapping on the Lens icon in Google Photos. Over the next few weeks, we’ll add more capabilities, as well as the ability to use Lens in the Google Assistant. With the Assistant, it will provide a conversational experience for quick help with what you see, right in the moment.

Get things done with Pixelbook and Pixelbook Pen: On Pixelbook, your Assistant can help you send a quick email, create a new doc or get the details of your next calendar event. And with Pixelbook Pen, you can circle text or images on your screen to get more information or take action. Looking at a photo and wondering where the beautiful mountainscape is located? Circle it and let your Assistant do the rest.

On the go with Pixel Buds: Pixel Buds are optimized for the Google Assistant on Android phones, so you can play music, have notifications read to you, get directions or set a reminder, all without looking at your phone.

Control your smart home with Nest: With Nest Camera, you can say “Ok Google, show me the entryway on my TV” to your Assistant on Google Home and keep up with what’s going on in your home. Coming next year, with the Familiar Faces feature on Nest Hello, when the doorbell rings and Nest Hello recognizes the person at the door, it will automatically have the Assistant broadcast that information to all the Google Home devices in the house. So you can know who’s there right when they arrive.

So that’s what’s new with the Assistant. We’re continuing to make it more helpful and more available on new devices—whether you’re at home, on the go or somewhere in between—and in new languages and countries.

With all of the improvements built up over the past year, the Assistant can help you get more done and give you more time to focus on what matters. And we’re excited about what the future holds—with our expertise in natural language understanding, deep learning, computer vision, and understanding context, your Assistant will just keep getting better. Over time, we believe the Assistant has the potential to transform how we use technology—not only by understanding you better but also by giving you one easy-to-use, understandable way to interact with it. All you have to do is say “Ok Google” to get help from your own personal Google.

Google Pixel Buds—wireless headphones that help you do more

What if your headphones could do more than let you listen to your favorite music? What if they could help you get things done without having to look at your phone? What if they could help you answer (almost!) any question just by asking, or even help you understand someone speaking a different language?

We wanted to make a more helpful pair of headphones, so today, we’re introducing Google Pixel Buds. These wireless headphones not only sound great, they are seamless to use and charge, offer help from the Google Assistant, and have a few extra smarts so you can get the answers you need while keeping your eyes up.


Fit them, charge them, pair them—made simple

From getting the right fit, to keeping them charged, Pixel Buds are really simple to use. They’ve got a unique fabric loop, making them comfortable, secure, and quick to adjust without having to swap out pieces. We put all the audio controls into a touchpad on the right earbud, so there aren’t any buttons hanging on the cord. Just swipe forward or backward to control volume and tap to play or pause your music. Charging and storing them is easy—they nestle right into a pocket-sized charging case that gives you up to 24 hours of listening time*. And pairing them is a cinch. Just open the charging case near your Pixel or Android phone running Android 7.0 Nougat or higher with the Assistant, and your phone will automatically detect them and ask you if you want to connect.

Get help from the Google Assistant with just a touch


Pixel Buds bring Google smarts right to your ears, with answers and intel that would make James Bond jealous. Touch and hold the right earbud to ask your Assistant to play music, make a phone call, or get directions, all without pulling out your phone. If you have an upcoming meeting or you’re waiting on a text from a friend, the Assistant can alert you to a calendar event or incoming message, and even read it to you if you can’t look at your phone at that moment.

Be multilingual with Google Translate and Pixel

Pixel Buds can even translate between languages in real time using Google Translate on Pixel. It’s like you’ve got your own personal translator with you everywhere you go. Say you’re in Little Italy, and you want to order your pasta like a pro. All you have to do is hold down on the right earbud and say, “Help me speak Italian.” As you talk, your Pixel phone’s speaker will play the translation in Italian out loud. When the waiter responds in Italian, you’ll hear the translation through your Pixel Buds. If you’re more of a sushi or French food fan, no need to worry—it works in 40 languages.
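Under the hood, this is essentially a loop of listen, translate, and speak that bounces between the earbuds and the phone. The Python sketch below is only a conceptual illustration of that round trip; every function is a hypothetical stand-in, not the actual Pixel Buds or Google Translate implementation:

```python
# Conceptual sketch of the translation round trip. The earbuds capture your
# speech, the phone translates it and plays it aloud, and the reverse happens
# when the other person replies. All functions are hypothetical stand-ins.

def recognize_speech(source):
    # Stand-in for speech recognition; returns canned phrases for the example.
    samples = {
        "earbud_mic": "I'd like the spaghetti, please",  # you, speaking English
        "phone_mic": "Certo, e da bere?",                # the waiter, in Italian
    }
    return samples[source]

def translate(text, source_lang, target_lang):
    # Stand-in for the translation step handled by Google Translate.
    return f"[{source_lang}->{target_lang}] {text}"

def speak(text, device):
    print(f"{device} plays: {text}")

# You talk into the earbuds; the phone's speaker plays the Italian translation.
speak(translate(recognize_speech("earbud_mic"), "en", "it"), device="phone speaker")

# The waiter replies in Italian; the English translation plays in your earbuds.
speak(translate(recognize_speech("phone_mic"), "it", "en"), device="Pixel Buds")
```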


Pixel Buds come in three colors—Just Black, Clearly White and Kinda Blue—to match your Pixel 2. They’ll be available in November for $159 in the U.S. and are available to pre-order today. They’re also coming to Canada, U.K., Germany, Australia and Singapore in November.**

With Pixel Buds, we’re excited to put all the power of the Google Assistant into a pair of headphones you can take with you everywhere, so you can easily control your tunes, get walking directions to the nearest coffee spot or have a conversation with someone from another country without ever pulling out your phone.

*Total listening times are approximate and are measured using fully charged Google Pixel Buds and a fully charged case for the first Pixel Buds re-charge cycle. Actual results may vary.  Pixel Buds battery testing conducted in September 2017 on pre-production Pixel Buds connected to a pre-production Pixel 2 phone.

**This device is a prototype unit in Canada, UK, Germany, Australia, and Singapore and cannot be marketed, sold, leased or distributed until it complies with applicable essential requirements and obtains required legal authorizations.
