Consumption models helping to enable business transformation

Co-Author: Larry Scherzer, Director, Financial Product Development

Digital transformation is really about business transformation and is driving the need for new consumption models. Gartner reports that 47 percent of CEOs are being challenged by their board of directors to make progress in digital business. The demands are becoming more diverse and complex as new business applications […]

From Phone Trials to Total Connection

Many of us feel constantly connected through our cell phones, but a missed call or text isn’t the end of the world. For lawyers, though, every phone call matters. A major California firm was going through some issues with its phones. Its provider’s solution couldn’t keep up with the firm’s workload. And they knew dropped calls […]

Creatives are Pressured to Deliver, Yet Constantly Interrupted

When Kristina, a graphic designer in San Francisco, first logs in to her work computer for the day, she heads to her to-do list to check what needs to be done. Then she launches Adobe Photoshop or Illustrator to work on her vector illustrations or web design mock-ups. Thus begins her day-long marathon of switching between tools.

Rethinking iOS Apps in the Enterprise

Enterprises are deploying more and more business-critical applications on their networks, making it important that these apps receive higher priority for performance. With the recent partnership announcements between Cisco and Apple, application developers now have the power to enable “Quality of Service” (QoS) tags right from within their application. As an app developer, […]

Working with Multiple JobServices

Posted by Isai Damier, Software Engineer, Android DA

In its continuous effort to improve user experience, the Android platform has
introduced strict limitations on background services starting in API level 26.
Basically, unless your app is running in the foreground,
the system will stop all of your app’s background services within minutes.

As a result of these restrictions on background services,
JobScheduler jobs have become the de facto solution for performing
background tasks. For people familiar with services, JobScheduler
is generally straightforward to use, except in a few cases, one of which
we shall explore presently.

Imagine you are building an Android TV app. Since channels are very important to
TV Apps, your app should be able to perform at least five different background
operations on channels: publish a channel, add programs to a channel, send logs
about a channel to your remote server, update a channel’s metadata, and delete a
channel. Prior to Android 8.0 (Oreo), each of these five operations could be
implemented within background services. Starting in API 26, however, you must be
judicious in deciding which should be plain old background Services
and which should be JobServices.

In the case of a TV app, of the five operations mentioned above, only channel
publication can be a plain old background service. For some context, channel
publication involves three steps: first the user clicks on a button to start the
process; second the app starts a background operation to create and submit the
publication; and third, the user gets a UI to confirm subscription. So as you
can see, publishing channels requires user interactions and therefore a visible
Activity. Hence, ChannelPublisherService could be an IntentService
that handles the background portion. The reason you should not use a
JobService here is that JobService introduces a
delay in execution, whereas user interaction usually requires an immediate
response from your app.

For the other four operations, however, you should use JobServices;
that’s because all of them may execute while your app is in the background. So
respectively, you should have ChannelProgramsJobService,
ChannelLoggerJobService, ChannelMetadataJobService,
and ChannelDeletionJobService.

Avoiding JobId Collisions

Since all four of the JobServices above deal with Channel
objects, it would be convenient to use the channelId as the
jobId for each one of them. But because of the way
JobServices are designed in the Android Framework, you can’t. The
following is the official description of jobId:

Application-provided id for this job. Subsequent calls to cancel, 
or jobs created with the same jobId, will update the pre-existing 
job with the same id. This ID must be unique across all clients 
of the same uid (not just the same package). You will want to 
make sure this is a stable id across app updates, so probably not 
based on a resource ID.

What the description is telling you is that even though you are using four
different JobService subclasses, you still cannot use the same
channelId as their jobIds: jobIds are not namespaced
per class.

This indeed is a real problem. You need a stable and scalable way to relate a
channelId to its set of jobIds. The last thing you
want is to have different channels overwriting each other’s operations because
of jobId collisions. Were jobId of type String instead
of Integer, the solution would be easy: jobId = "ChannelPrograms" +
channelId for ChannelProgramsJobService, jobId =
"ChannelLogs" + channelId for ChannelLoggerJobService,
etc. But since
jobId is an Integer and not a String, you have to devise a clever
system for generating reusable jobIds for your jobs. And for that,
you can use something like the following JobIdManager.

JobIdManager is a class that you tweak according to your app’s
needs. For this present TV app, the basic idea is to use a single
channelId across all jobs dealing with Channels. Let's
first look at the code for this sample JobIdManager class, and then
we'll discuss it.

import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;

import androidx.annotation.IntDef;

public class JobIdManager {

   public static final int JOB_TYPE_CHANNEL_PROGRAMS = 1;
   public static final int JOB_TYPE_CHANNEL_METADATA = 2;
   public static final int JOB_TYPE_CHANNEL_DELETION = 3;
   public static final int JOB_TYPE_CHANNEL_LOGGER = 4;

   public static final int JOB_TYPE_USER_PREFS = 11;
   public static final int JOB_TYPE_USER_BEHAVIOR = 21;

   @IntDef(value = {
           JOB_TYPE_CHANNEL_PROGRAMS,
           JOB_TYPE_CHANNEL_METADATA,
           JOB_TYPE_CHANNEL_DELETION,
           JOB_TYPE_CHANNEL_LOGGER,
           JOB_TYPE_USER_PREFS,
           JOB_TYPE_USER_BEHAVIOR
   })
   @Retention(RetentionPolicy.SOURCE)
   public @interface JobType {
   }

   // Lower 15 bits hold the objectId (fits in a positive Short). Adjust per your needs.
   private static final int JOB_TYPE_SHIFTS = 15;

   public static int getJobId(@JobType int jobType, int objectId) {
       if (0 < objectId && objectId < (1 << JOB_TYPE_SHIFTS)) {
           return (jobType << JOB_TYPE_SHIFTS) + objectId;
       } else {
           String err = String.format("objectId %d must be strictly between %d and %d",
                   objectId, 0, (1 << JOB_TYPE_SHIFTS));
           throw new IllegalArgumentException(err);
       }
   }
}

As you can see, JobIdManager simply combines a prefix with a
channelId to get a jobId. This elegant simplicity,
however, is just the tip of the iceberg. Let’s consider the assumptions and
caveats beneath.

First insight: you must be able to coerce channelId into a Short,
so that when you combine channelId with a prefix you still end up
with a valid Java Integer. Now of course, strictly speaking, it does not have to
be a Short. As long as your prefix and channelId combine into a
non-overflowing Integer, it will work. But margin is essential to sound
engineering. So unless you truly have no choice, go with a Short coercion. One
way you can do this in practice, for objects with large IDs on your remote
server, is to define a key in your local database or content provider and use
that key to generate your jobIds.
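As a sketch of that last suggestion: the class below is a hypothetical, in-memory stand-in for such a key table (the class name and map are illustrative inventions; in a real app the stable small key would typically be the autoincrement rowid of the channel's row in your local database, which survives process restarts):

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical stand-in for a local key table. In a real app the stable
// small key would be your database row's autoincrement id; this in-memory
// map is only for illustration.
public class LocalKeyAllocator {
    private final Map<Long, Integer> remoteToLocal = new HashMap<>();
    private int nextKey = 1;

    // Returns a stable, compact key for a (possibly huge) remote id.
    public synchronized int keyFor(long remoteId) {
        return remoteToLocal.computeIfAbsent(remoteId, id -> nextKey++);
    }

    public static void main(String[] args) {
        LocalKeyAllocator alloc = new LocalKeyAllocator();
        long remoteChannelId = 9_876_543_210L; // too large for an int jobId
        int localKey = alloc.keyFor(remoteChannelId);
        System.out.println(localKey); // prints 1: small enough to prefix safely
    }
}
```

Asking for the same remote id again returns the same key, so the derived jobId stays stable across scheduling calls.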

Second insight: your entire app ought to have only one JobIdManager
class. That class should generate jobIds for all your app’s jobs:
whether those jobs have to do with Channels, Users, or
Cats and Dogs. The sample JobIdManager
class illustrates this: not all JOB_TYPEs have to do with
Channel operations. One job type has to do with user prefs and one
with user behavior. The JobIdManager accounts for them all by
assigning a different prefix to each job type.

Third insight: for each JobService in your app, you must have a
unique and final JOB_TYPE_ prefix. Again, this must be an
exhaustive one-to-one relationship.
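To make the prefixing concrete outside of Android, here is a stripped-down, plain-Java version of the scheme (the @IntDef annotation is dropped, and the decode helpers getJobType and getObjectId are additions for illustration, not part of the original class):

```java
// Plain-Java sketch of the jobId prefixing scheme, with two job types.
public class JobIds {
    public static final int JOB_TYPE_CHANNEL_PROGRAMS = 1;
    public static final int JOB_TYPE_CHANNEL_LOGGER = 4;

    // Lower 15 bits hold the objectId; the jobType sits above them.
    private static final int JOB_TYPE_SHIFTS = 15;

    public static int getJobId(int jobType, int objectId) {
        if (0 < objectId && objectId < (1 << JOB_TYPE_SHIFTS)) {
            return (jobType << JOB_TYPE_SHIFTS) + objectId;
        }
        throw new IllegalArgumentException("objectId out of range: " + objectId);
    }

    // Decode helpers, handy when inspecting pending jobs while debugging.
    public static int getJobType(int jobId) {
        return jobId >> JOB_TYPE_SHIFTS;
    }

    public static int getObjectId(int jobId) {
        return jobId & ((1 << JOB_TYPE_SHIFTS) - 1);
    }

    public static void main(String[] args) {
        int channelId = 7;
        int programsJob = getJobId(JOB_TYPE_CHANNEL_PROGRAMS, channelId);
        int loggerJob = getJobId(JOB_TYPE_CHANNEL_LOGGER, channelId);
        // Same channelId, different job types, so the jobIds never collide.
        System.out.println(programsJob != loggerJob); // prints true
        System.out.println(getObjectId(programsJob)); // prints 7
    }
}
```

Because the type prefix occupies the high bits, two jobs for the same channel can never collide as long as each JobService keeps its own prefix.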

Using JobIdManager

The following code snippet from ChannelProgramsJobService
demonstrates how to use a JobIdManager in your project. Whenever
you need to schedule a new job, you generate the jobId using
JobIdManager.getJobId(...).

import android.app.job.JobInfo;
import android.app.job.JobParameters;
import android.app.job.JobScheduler;
import android.app.job.JobService;
import android.content.ComponentName;
import android.content.Context;
import android.os.PersistableBundle;
import android.util.Log;

public class ChannelProgramsJobService extends JobService {

   private static final String TAG = "ChannelProgramsJob";
   private static final String CHANNEL_ID = "channelId";
   . . .

   public static void schedulePeriodicJob(Context context,
                                          final int channelId,
                                          String channelName,
                                          long intervalMillis,
                                          long flexMillis) {
       JobInfo.Builder builder = scheduleJob(context, channelId);
       builder.setPeriodic(intervalMillis, flexMillis);

       JobScheduler scheduler =
               (JobScheduler) context.getSystemService(Context.JOB_SCHEDULER_SERVICE);
       if (JobScheduler.RESULT_SUCCESS != scheduler.schedule(builder.build())) {
           // TODO: maybe log the failure to your analytics server?
           Log.d(TAG, "could not schedule program updates for channel " + channelName);
       }
   }

   private static JobInfo.Builder scheduleJob(Context context, final int channelId) {
       ComponentName componentName =
               new ComponentName(context, ChannelProgramsJobService.class);
       final int jobId = JobIdManager
               .getJobId(JobIdManager.JOB_TYPE_CHANNEL_PROGRAMS, channelId);
       PersistableBundle bundle = new PersistableBundle();
       bundle.putInt(CHANNEL_ID, channelId);
       JobInfo.Builder builder = new JobInfo.Builder(jobId, componentName);
       builder.setPersisted(true);
       builder.setExtras(bundle);
       builder.setRequiredNetworkType(JobInfo.NETWORK_TYPE_ANY);
       return builder;
   }

   ...
}

Footnote: Thanks to Christopher Tate and Trevor Johns for their invaluable
feedback.

Why All UX Designers Should Be Creating User Journeys, And Here’s How To Make One

Good design is all about the user. If designers truly want to create the best products, it’s important for them to see the product from the user’s perspective. That’s where a tool called a user journey comes in. In this article, I’ll introduce the concept of a user journey along with some tips and specific examples.

Digital Transformation and “Cloud Ready”

We hear a lot about digital transformation, hybrid cloud, and “lift and shift (to cloud)” from various marketing sources. Let’s examine that. About two years ago I blogged and presented “Is Your Network Cloud Ready?” (See also the presentation with more basics in it.) See also some of my more recent blogs at netcraftsmen.com. This […]

Will Robots Take Over the World and Destroy All Our Jobs?

Welcome to the fight of the century. In one corner there are the prophets of doom, who say that no job is safe from automation, and economic chaos is inevitable. And in the other corner, we have the rosy optimists, who believe technology will usher in a new era of meaningful work, more leisure time, […]

Introducing Portfolio’s New Integration With Adobe Lightroom

Whether you’re a weekend adventurer or working the red carpet, Adobe Lightroom is a critical tool in every photographer’s kit. Designed with creatives like you in mind, Adobe Portfolio makes showcasing your work effortless. And it just got even better. Now, with the new Adobe Lightroom integration on Adobe Portfolio, you can easily import your Albums and publish your best shots on your customized website in just a few clicks.

Get the latest on building the innovative apps of the future right now – at Connect(); 2017

The inspiration for our Connect(); event has always been about developers and the innovative applications they create. I am excited to announce our popular and highly anticipated Connect(); 2017 returns Nov. 15-17. From the earliest days of computing, developers have shaped the future and changed the world. We’re at a critical inflection point where cloud, data…

The post Get the latest on building the innovative apps of the future right now – at Connect(); 2017 appeared first on The Official Microsoft Blog.

I. Am. A. Woman. In. Tech.

“You are a woman in tech. You are part of the 11% of women in cybersecurity. You make an impact. You’re not alone – you have the #wearecisco fam behind you. Don’t ever forget that.” – Tammy Nguyen shares her story of realizing she was, indeed, a woman …

Exploring contemporary art with Google Arts & Culture

Working with more than 180 partners all over the world, Google Arts & Culture is shining a light on contemporary art, with a new collection of online stories and rich digital content at g.co/ContemporaryArt.


Through an immersive digital journey, we bring you straight to the institutions housing the world’s seminal contemporary art collections with the help of high-quality visuals, gigapixel-resolution images (which allow you to zoom into the tiny details of a piece of art), and panoramic Museum View imagery. You can hear amazing stories about art from curators, artists, and experts from institutions all over the world.

With a repository of online exhibits and editorial features, we answer common questions about the contemporary art world, introduce you to the world’s leading contemporary artists and icons, and, perhaps most importantly, explore the issues that are shaping art today.



Explore more stories and immersive digital content on contemporary art from over 180 partners around the world with the Google Arts and Culture app on Android and iOS.


Look, Ma, no SIM card!

While phones have come a long way over the years, there’s one thing that hasn’t changed: you still need to insert a SIM card to get mobile service. But on the new Pixel 2, Project Fi is making it easier than ever to get connected. It’s the first phone built with eSIM, an embedded SIM that lets you instantly connect to a carrier network with the tap of a button. This means you no longer need to go to a store to get a SIM card for wireless service, wait a few days for your card to arrive in the mail, or fumble around with a bent paper clip to coax your SIM card into a tiny slot. Getting wireless service with eSIM is as quick as connecting your phone to Wi-Fi.  

You’ll see the option to use eSIM to connect to the Project Fi network on all Pixel 2s purchased through the Google Store or Project Fi. If you’re already a Project Fi subscriber, simply power up your Pixel 2 to begin setup. When you’re prompted to insert a SIM card, just tap the button for SIM-free setup, and we’ll take care of the heavy lifting.

For now, we’re piloting eSIM on the newest Pixel devices with Project Fi. We look forward to sharing what we learn and working together with industry partners to encourage more widespread adoption.

While we can talk about eSIM all we want, nothing beats trying it for the first time. If you’d like to give it a go, head over to the Project Fi website to sign up and purchase the Pixel 2.

The best hardware, software and AI—together

Today, we introduced our second generation family of consumer hardware products, all made by Google: new Pixel phones, Google Home Mini and Max, an all new Pixelbook, Google Clips hands-free camera, Google Pixel Buds, and an updated Daydream View headset. We see tremendous potential for devices to be helpful, make your life easier, and even get better over time when they’re created at the intersection of hardware, software and advanced artificial intelligence (AI).

Why Google?

These days many devices—especially smartphones—look and act the same. That means in order to create a meaningful experience for users, we need a different approach. A year ago, Sundar outlined his vision of how AI would change the way people use computers. And in fact, AI is already transforming what Google’s products can do in the real world. For example, swipe typing has been around for a while, but AI lets people use Gboard to swipe-type in two languages at once. Google Maps uses AI to figure out what the parking is like at your destination and suggest alternative spots before you’ve even put your foot on the gas. But, for this wave of computing to reach new breakthroughs, we have to build software and hardware that can bring more of the potential of AI into reality—which is what we’ve set out to do with this year’s new family of products.

Hardware, built from the inside out

We’ve designed and built our latest hardware products around a few core tenets. First and foremost, we want them to be radically helpful. They’re fast, they’re there when you need them, and they’re simple to use. Second, everything is designed for you, so that the technology doesn’t get in the way and instead blends into your lifestyle. Lastly, by creating hardware with AI at the core, our products can improve over time. They’re constantly getting better and faster through automatic software updates. And they’re designed to learn from you, so you’ll notice features—like the Google Assistant—get smarter and more assistive the more you interact with them.

You’ll see this reflected in our 2017 lineup of new Made by Google products:

  • The Pixel 2 has the best camera of any smartphone, again, along with a gorgeous display and augmented reality capabilities. Pixel owners get unlimited storage for their photos and videos, and an exclusive preview of Google Lens, which uses AI to give you helpful information about the things around you.
  • Google Home Mini brings the Assistant to more places throughout your home, with a beautiful design that fits anywhere. And Max is our biggest and best-sounding Google Home device, powered by the Assistant. And with AI-based Smart Sound, Max has the ability to adapt your audio experience to you—your environment, context, and preferences.
  • With Pixelbook, we’ve reimagined the laptop as a high-performance Chromebook, with a versatile form factor that works the way you do. It’s the first laptop with the Assistant built in, and the Pixelbook Pen makes the whole experience even smarter.
  • Our new Pixel Buds combine Google smarts and the best digital sound. You’ll get elegant touch controls that put the Assistant just a tap away, and they’ll even help you communicate in a different language.
  • The updated Daydream View is the best mobile virtual reality (VR) headset on the market, and the simplest, most comfortable VR experience.
  • Google Clips is a totally new way to capture genuine, spontaneous moments—all powered by machine learning and AI. This tiny camera seamlessly sends clips to your phone, and even edits and curates them for you.

Assistant, everywhere

Across all these devices, you can interact with the Google Assistant any way you want—talk to it with your Google Home or your Pixel Buds, squeeze your Pixel 2, or use your Pixelbook’s Assistant key or circle things on your screen with the Pixelbook Pen. Wherever you are, and on any device with the Assistant, you can connect to the information you need and get help with the tasks to get you through your day. No other assistive technology comes close, and it continues to get better every day.

Google’s hardware business is just getting started, and we’re committed to building and investing for the long run. We couldn’t be more excited to introduce you to our second-generation family of products that truly brings together the best of Google software, thoughtfully designed hardware with cutting-edge AI. We hope you enjoy using them as much as we do.

A new angle on your favorite moments with Google Clips

We love photos and videos. They take us back to a special time with our friends and family. Some of our favorites are genuine shots that capture the essence of the moment.

The trouble is, getting those spontaneous shots means that someone has to be the “designated photographer”—always waiting to snap a photo at just the right moment. I would have loved more images of me holding my kids, Clark and Juliet, when they were newborns, but because my wife and I had our hands full, these moments got away from us.

At Google, we’ve been working on a new type of camera that lets you capture more of these special moments, while allowing yourself also to be in the moment.

Today we’re introducing Google Clips, a lightweight, hands-free camera that helps you capture more genuine and spontaneous moments of the people—and pets!—who matter to you. You can set the camera down on the coffee table when the kids are goofing around or clip it to a chair to get a shot of your cat playing with its favorite toy. There’s also a shutter button—both on the camera and in the corresponding app—so you can capture other moments or subjects, whatever you please.

Google Clips is small, weighs almost nothing, and comes with a clip to hold it steady.

We’ve put machine learning capabilities directly into Clips so when you turn it on, the camera looks for good moments to capture. Clips looks for stable, clear shots of people you know. You can help the camera learn who is important to you, so when grandma comes to town, you’ll capture the grand entrance.

The camera shoots short motion photos that last several seconds. As you probably guessed, we call these “clips.”

Your clips sync wirelessly and in seconds from the camera to the Google Clips app for Android or iOS. Simply swipe to save or delete your clips, or choose an individual frame to save as a high-resolution still photo. You can view and organize anything you’ve saved in Google Photos (or your favorite gallery app). And if you’re using Google Photos, you can back up unlimited clips for free.


We know privacy and control really matter, so we’ve been thoughtful about this for Clips users, their families, and friends. Clips was designed and engineered with these principles in mind.

  • It looks like a camera, and lights up when it’s on so everyone knows what Clips does and when it’s capturing.
  • It works best when used at home with family and close friends. As you capture with Clips, the camera learns to recognize the faces of people that matter to you and helps you capture more moments of them.
  • Finally, all the machine learning happens on the device itself. And just like any point-and-shoot, nothing leaves your device until you decide to save it and share it.  

Google Clips is coming soon to the U.S. for $249. In this first edition, Clips is designed specifically with parents and pet owners in mind. It works best with Pixel, and also works with Samsung S7/8 and on iPhone (6 and up).

We hope Google Clips helps you capture more spontaneous moments in life, without any of the hassle.

One of my favorite clips I’ve captured with my family.