Latest Tech Feeds to Keep You Updated…

This Holiday Season – Buy One IoT Device, Get Free CVEs

As the Internet of Things gains steam and continues to develop, so do the adversaries and threats targeting these systems. Companies throughout the world are busy deploying low-cost Internet-connected computing devices (aka the Internet of Things) to solve business problems and improve our lives. In tandem, criminals are developing their methods for abusing and […]

Gear S3 Value Pack Update: Timeless Outside. Even More Revolutionary Inside.

Recognized as one of the best-in-class smartwatches in the wearable category, Samsung’s Gear S3 has been praised for its design, user friendliness and technological innovation. But with the recently released value pack update, the Gear S3 is more versatile than ever.

Packed with enhancements that augment the device’s utility and streamline users’ access to the information they rely on, the update transforms the Gear S3 into a controller, a tracker, a communicator and, of course, a watch – all in one device.

Workouts, Your Way

Exercising with the Gear S3 has been completely transformed, thanks to new features designed to make activity tracking more intuitive.*

Advanced, real-time heart rate monitoring with improved accuracy and detailed feedback lets users continuously monitor their heart rate activity – whether they’re enjoying a relaxing yoga session or an exhilarating kickboxing class. They can also control their weight more efficiently via the nutrition management feature, where they can easily add calories consumed, check their calorie balance and compare it to their daily target.

Fitness buffs looking to take their workout routine to the next level will appreciate the Samsung Health Fitness Program feature, which lets them watch exercise programs from their synced smartphone on a TV. Once connected, they can use their Gear S3 to control the displayed content and view their heart rate on the TV.

Centralized Communications, Right from Your Wrist

For all its productivity and fitness features, the Gear S3 is not just a lifestyle device. It’s also a communicator that makes getting in touch and staying on task easy and efficient.

In addition to searching contacts via the device, users can also now create new contacts right from the screen of the Gear S3. They can also create events along with related information such as date and time, reminder alerts, and location (text only) with a few simple taps and twists of the bezel.

Furthermore, rather than just checking reminders created on the Gear S3, users can also now view and edit checklist, video and web reminders created on their synced smartphone on the smartwatch. For instance, they can create a grocery list on their mobile device and tick items off right from their wrist as they add them to their shopping cart.**

A UX Optimized for the Way You Use Your Device

The Gear S3 sets users free from their phone; a turn of the device’s signature rotating bezel is all it takes to respond to calls, read messages or access an app. But with the latest updates, the device’s UX is even more seamless and user-friendly.

Widgets, for example, have been optimized to fit the newly enhanced circular display of the Gear S3 so that more information can be viewed at a glance. A band has been added around the perimeter of the screen along with widget-specific text such as contact names, detailed weather information and the remaining time before an alarm is set to go off.

By rotating the bezel at a faster or slower rate, users can view more or less information, respectively. For instance, if a user wants to change their device’s watch face, they can see more design options on the screen at once by turning the bezel at a faster speed.

Users can also use the bezel to naturally move from a text message notification to the reply input. Should they not have enough time to send a detailed reply, they can make use of even more default quick replies to express themselves in a snap. They can also create and edit their own quick replies directly on the smartwatch.

Gear S3 owners also now have the option to sort apps by most recently used, in addition to customizing their placement. The Moment Bar, which allows users to adjust the volume, check the battery level and more, is easily accessed with a swipe up or down from any screen.

To top things off, Samsung Gear, the app that Gear S3 owners use to sync their smartwatch with their smartphone, has been enhanced with a modern, image-focused design to better harmonize with the classic aesthetic of the device.

Enhanced Control for Ultimate Connectivity

Samsung’s ever-growing connected ecosystem brings with it the need for more control. With its large touchscreen and rotating bezel, the Gear S3 is the perfect tool for controlling one’s devices, and the new value pack update only enhances this role.

Users can now manage their compatible Samsung IoT-enabled devices right from their Gear S3 with Samsung Connect. The smartwatch also functions as a remote control for PowerPoint presentations and Samsung Gear VR, adding an element of convenience to both work and play.

The Gear S3 value pack update is now available for download via the Samsung Gear app.

* To activate these new features, the Samsung Health app must first be updated on the smartphone synced to the Gear S3.

** The Reminder function only works with the Gear S3 if the Reminder app is downloaded on the synced smartphone, which is limited to the Galaxy S8, Galaxy S8+ and Galaxy Note8.

Archer Goes Undercover at Comic Con

It didn’t take a secret agent to figure out that Archer’s appearance at this year’s Comic Con in San Diego was a little out of the ordinary. Typically appearing in a video to explain to the audience why he can’t attend the annual event, the protagonist of the spy sitcom of the same name quickly proved that he was, indeed, a live presence.

Rather than create another video featuring Archer half-heartedly apologizing for his absence, the show’s production team thought it would be fun to change things up, and have Archer interact in real-time with fans. Leading the charge was the show’s technical director, Bryan Fordney of Floyd County Productions, who used Adobe Character Animator to achieve the live animation.

ARCHER—Pictured: Sterling Archer (voice of H. Jon Benjamin). CR: FXX

“We’ve always used Adobe software to produce the show,” explains Bryan. “Using Adobe Character Animator made a lot of sense because it features a similar toolset, and we already had a lot of the artwork on hand.”

At the start of the panel, the Archer character appears on screen, explaining that thanks to some time off, he was able to be at Comic Con. The audience didn’t quite grasp the meaning of this until he called out a woman in red with a “pink thing” in her hair, and asked her to stand up. It was then that the crowd realized there was more to Archer’s appearance than usual.

Meanwhile, behind the screen, Bryan and H. Jon Benjamin, the voice actor behind the Archer character, sat at a computer with a camera lens facing the crowd. While Jon spoke to the audience, Bryan worked the keyboard, enabling Archer to blink, point, and turn his head accordingly. It didn’t take long before Jon caught on, eventually taking control of the keyboard and adding a few gestures of his own.

The interaction didn’t end there. During the Q&A session towards the end of the panel session, Archer occasionally interrupted to add his own commentary—and the audience loved it.

“He got some really good laughs,” says Casey Willis, co-executive producer on Archer. “People were really impressed by the technology.”

The production team was able to take their concept and turn it into reality thanks to their extensive experience working with Adobe Creative Cloud apps to produce the series. They use Adobe After Effects for compositing and animation, Adobe Illustrator to draw the characters, and Adobe Photoshop for background paintings—all solutions within Creative Cloud for teams. Because of this, the artwork was already in place to build the 3D rig.

“We took the artwork and started to experiment using some of the templates in Character Animator,” says Bryan. He built on it from there, piece by piece. The final rig was quite complex, and included several head angles and customized mouths to make Archer’s speech appear as fluid as possible.

Throughout the process Bryan benefitted from some expert advice from Adobe. “The Adobe engineers were great, and gave me some valuable tips on how to tweak it for a live show,” he says. “We spent a lot of time modifying the rig so that Archer looked as natural as possible.”

This side project was in addition to daily production work on the show, which is now entering its ninth season. Storyboards and voiceovers are edited in Premiere Pro. Once cut and approved, they go to various departments for specialized work. The illustration team draws the key poses for every action, every costume, and all of the other visual elements of the show. The background team works with the 3D department to render the backgrounds, which are then painted over in Photoshop.

Once the illustrations and background are complete, they’re sent to After Effects, which contains customized workflows that merge the application’s animation capabilities with its compositing features. The file is then rendered into Premiere Pro, where the final cut is done.

Despite a grueling production schedule, Bryan somehow found time to get his feet wet with Character Animator. With the basics now under his belt, fans are left wondering: will Archer make another live appearance at next year’s Comic Con?

“Archer’s plans for next year’s Comic Con remain top secret,” says Bryan. “Everyone will have to just wait and see.”

Learn more about Adobe Character Animator CC

Try Adobe Character Animator CC

Strides in Stewardship, Part 1: Innovating the Mobile Antenna to Maximize User Safety and Device Performance

Radio frequency (RF) radiation (sometimes referred to as “cell phone radiation”) is one of the most common environmental exposures, and one about which anxiety and speculation are spreading quickly. Amid these growing concerns, Samsung Electronics has been working in earnest to design its smartphone and wearable technology so that RF exposure is minimized as much as possible.

By establishing internal Specific Absorption Rate (SAR) standards, enhancing antenna design and maintaining close collaboration between teams during product development stages, the company has been able to produce a number of mobile devices with some of the lowest SAR levels on the market.

In the first installment of the “Strides in Stewardship” series, we will take a closer look at some of the initiatives taken by Samsung to achieve this accomplishment.

So What Exactly Is SAR?

When electromagnetic waves are received and transmitted by a wireless device such as a smartphone, some of the RF energy is lost to the surrounding environment and can be absorbed by the person handling the product. SAR is the rate at which human biological tissues – such as those in the body and head – absorb these stray RF signals from the source being measured.
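For context, the quantity itself has a simple physical definition. At any point in tissue, SAR can be computed from the local electric field together with the tissue’s conductivity and density – a standard textbook formulation, not a Samsung-specific one:

```latex
\mathrm{SAR} = \frac{\sigma \, \lvert E \rvert^{2}}{\rho}
```

Here σ is the tissue conductivity (S/m), E is the RMS electric field strength (V/m) and ρ is the tissue mass density (kg/m³). The regulatory limits discussed below are applied to this quantity after it is averaged over a small mass of tissue.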

SAR provides a straightforward method for measuring the RF exposure characteristics of mobile devices, ensuring they fall within safety guidelines based on standards developed by independent scientific organizations through exhaustive evaluation of scientific studies.

In the US, the Federal Communications Commission (FCC) requires that all phones sold have an SAR level at or below 1.6 watts per kilogram (W/kg) averaged over 1 gram of actual tissue – a guideline based upon the standards developed by IEEE, NCRP and input from other federal agencies. Similarly, Japan, China and most European nations follow the guidelines specified by the International Commission on Non-Ionizing Radiation Protection (ICNIRP), which sets the SAR limit at 2 W/kg averaged over 10 grams of actual tissue.

* The SAR levels indicated in this chart are based on US devices. SAR levels may vary by model and region. For more information, visit: https://www.samsung.com/sar/

Samsung is so committed to ensuring the safety of its users that its internal maximum SAR level is half the limits fixed by the previously mentioned international standards. This is emphasized in the chart above, which lists the company’s smartphones launched in 2017 and their respective SAR levels. These levels, particularly the Head SAR levels, are among the lowest on the market today.

To achieve the lowest SAR levels possible, Samsung has dedicated many of its resources to research and development, particularly in the area of antenna design. That’s because RF waves from mobile phones are emitted from the antenna, and the closer the antenna is to body tissue, the greater a person’s exposure to RF energy is expected to be.

Taking the Lead in Antenna Innovation

In 2006, Samsung adopted an innovative antenna system to enhance device safety by maintaining significantly lower SAR levels compared to other products while simultaneously improving overall RF performance. One way the company has been successful in achieving this goal is by making antenna location a priority in its smartphone design.

“Antenna placement is incredibly important. A one-millimeter difference in location can reduce the SAR level by a large margin,” noted Yoonjae Lee, Antenna Group, Mobile R&D Office, Samsung Electronics. “The location of the antenna has to be decided during the product planning stage, as it’s difficult to change its placement later on.”

Over the past few years, Samsung has focused its efforts on embedding the TX antennas – the antennas most closely associated with higher SAR rates – at the bottom of the phone body. Positioned there, RF emissions are directed away from the user’s head, which is where the phone is typically held during calls and the part of the body where RF energy poses the biggest health concerns.

But recently, design trends have shifted to favor metal phone bodies with bigger displays, making antenna positioning more challenging.

With today’s devices, the external metal frames of the phone body become the antenna itself. Therefore, antenna placement and general phone design have to be considered simultaneously, which requires the close cooperation of multiple teams. To facilitate this process, Samsung conducts a number of simulations using proprietary software and tools to determine the best location for the antenna based on the phone’s design, without having to use a physical prototype of the device.

In addition to the placement of the antenna, certain device materials and even chemicals in the product’s color coating can affect SAR levels as well as the performance of the antenna, further complicating the product development process. As a result, Samsung’s Antenna Group works closely with different design teams, including the Color, Material and Finish (CMF) team.

“When the CMF team proposes the use of a new material – which was the case with the Galaxy Note8 – the design team commissions a detailed substance analysis of the material,” said Lee. “The Antenna Group then conducts various tests to determine all the possible ways the material might influence the behavior of the antenna.”

As product development advances, Samsung tests SAR levels at every stage, considering the highest output when the body of the device and the head of the user are in close contact. Should the anticipated SAR levels of a product come out higher than the company’s standards during any stage of product development, the process is halted and modifications are made until the SAR levels are satisfactory.

To test the SAR level of a mobile device, Samsung conducts 2D and 3D scans using a special liquid-form material that has similar permittivity and conductivity levels as those of human body tissue. Measurements are made at different frequencies representing the frequency bands at which the device can transmit.
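As a rough numerical illustration of what such a scan yields – a simplified sketch with made-up values, not Samsung’s actual test procedure, which follows standardized phantoms, probes and averaging rules – point SAR values can be computed from the measured field at each grid position and then averaged over a volume corresponding to 1 gram of tissue:

```python
import numpy as np

# Hypothetical 3D scan of RMS electric field strength (V/m) inside a
# tissue-simulating liquid, sampled on a uniform 2 mm grid. The values
# here are random placeholders, not measurements.
rng = np.random.default_rng(0)
e_field = rng.uniform(5.0, 40.0, size=(30, 30, 30))  # V/m

sigma = 1.0   # assumed conductivity of the simulant, S/m
rho = 1000.0  # assumed density, kg/m^3 (water-like)

# Point SAR at every voxel: SAR = sigma * |E|^2 / rho  (W/kg)
point_sar = sigma * e_field**2 / rho

# Average over a cube of roughly 1 g of tissue: at 1000 kg/m^3, 1 g
# occupies 1 cm^3, i.e. a 5x5x5 block of 2 mm voxels.
block = 5
peak = 0.0
nx, ny, nz = point_sar.shape
for i in range(nx - block + 1):
    for j in range(ny - block + 1):
        for k in range(nz - block + 1):
            avg = point_sar[i:i+block, j:j+block, k:k+block].mean()
            peak = max(peak, avg)

print(f"Peak 1g-averaged SAR: {peak:.3f} W/kg (FCC limit: 1.6 W/kg)")
```

A real compliance campaign repeats this kind of peak search across frequencies, device positions and operating modes, and reports the worst case.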

Samsung has also applied its antenna know-how to its wearable devices, which boast SAR levels that are significantly lower than those of the products of the company’s competitors.

“Many other wearable manufacturers are struggling with SAR levels. Even if they try to utilize the appropriate materials and placement for the antenna, they lack relevant technology patents,” said Inyoung Lee, Antenna Group, Mobile R&D Office, Samsung Electronics. “But Samsung’s antenna design is completely different from those of other brands. As a result, we are able to sustain low SAR levels while ensuring optimal antenna performance.”

An Ongoing Effort

To maintain its momentum in antenna innovation, Samsung earlier this year established the Antenna Design Studio. The company hopes this initiative will help systemize the antenna design stage by integrating its many different parts – including software simulations and hardware testing – to minimize repetitive stages and achieve the most efficient performance.

The company has also established an internal task force to determine which fields of expertise are needed to enhance antenna design (and thus continue to lower SAR levels), and is working consistently to grow its workforce to include highly trained antenna technicians and material experts.

In doing so, the company strives to continue to lead the smartphone industry through antenna development, all the while producing mobile devices with the lowest SAR levels possible.

Defying gravity: an epic stunt at the Guggenheim Bilbao

When the Guggenheim Bilbao museum opened 20 years ago it was described by many as a starship from outer space. Its swirling roof is made of paper-thin titanium tiles—33,000 of them—covering the building like fish scales. At the time, it was such a novelty that the museum had to commission a chemical laboratory to produce a custom liquid to clean the titanium!

Guggenheim Bilbao (photo by Trashhand)

The museum was an unusual experiment not just because of its gleaming shell. Over two decades ago, following the collapse of the traditional industries Bilbao was built on, the city was scarred with industrial wastelands, abandoned factories, and a community afflicted by unemployment and social tensions. Bilbao surprised the world (and raised a few eyebrows) with a unique idea to kickstart the city's regeneration, and they set out to build—not new factories or new roads—but instead a new center for modern art.

Since then, the museum has attracted 19 million visitors and become the epicenter of the urban renewal that rippled through Bilbao. Today it stands as an icon of the city and its successful self-transformation. To celebrate the Guggenheim’s 20th anniversary, Google Arts & Culture partnered with the museum to bring its stories to you and show the building from a new angle.

But how do you find a new angle on one of the world's most photographed buildings? Google invited Johan Tonnoir—known for running and jumping across Paris's busy rooftops with only a pair of sturdy shoes—to the Guggenheim.

Johan explored the building in his own way … through a breathtaking stunt-run across the building and its iconic slippery roof. He climbed to the highest peak and jumped, flipped and leapt from one wing of the roof to the other, 50 meters above the ground. And all along, urban photographer Trashhand from Chicago followed him with his lens.

Check out the museum’s masterpieces on Google Arts & Culture (but please don't try to do it Johan's way…). You can see all this online at g.co/guggenheimbilbao or in the Google Arts & Culture app on iOS and Android.

Samsung Introduces New Premium Ultrasonic Diagnosis Device ‘RS85’

Samsung Medison, a global medical equipment company and an affiliate of Samsung Electronics, today introduced the RS85, a new premium ultrasonic diagnosis device that provides enhanced image quality, usability, and convenience for medical and radiology professionals.

“We are pleased to launch the RS85, a new premium medical device with superior image quality and usability based on Samsung’s advanced ultrasonic and radiology technologies,” said Insuk Song, Vice President, Health & Medical Equipment Business, Samsung Electronics. “We have high expectations the RS85 will make inroads into the global radiology market as Samsung Medison’s representative product. We will continue to diversify our product portfolio for different medical applications and sectors.”

Among its features, the RS85 offers the MV-Flow™ and S-Shearwave Imaging™ technologies. MV-Flow™ can detect blood flow in microvascular tissues that is hard to detect via conventional Doppler ultrasound, allowing researchers to check for indications of lesions related to cancer or inflammation. The S-Shearwave Imaging™ feature provides new indicators for clinical diagnosis by quantifying the elasticity of human anatomy via shear wave elastography, which increases the accuracy of diagnosis for diseases such as hepatocirrhosis and tumors.
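For the technically curious, the principle behind shear wave elastography can be stated compactly. Treating tissue as approximately incompressible, homogeneous and isotropic – the usual simplifying assumptions in the field, not an RS85 specification – tissue stiffness is estimated from the measured shear wave speed:

```latex
E \approx 3 \rho c_{s}^{2}
```

where E is Young’s modulus, ρ is the tissue density and c_s is the shear wave propagation speed. Stiffer tissue, such as a fibrotic liver, carries shear waves faster, which is why measuring c_s yields a quantitative elasticity indicator.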

Furthermore, the RS85 is equipped with CEUS+ for diagnosing blood flow or lesions using contrast agent images; it also provides an expanded diagnostic range covering the liver and breast areas, as well as safe, accurate diagnosis for young children. The S-Fusion™ function extends analysis to the prostate gland, and allows coordinated, simultaneous comparative analysis of images and sonograms from other diagnostic instruments such as MRI and CT.

The RS85 also has a monitor arm with a wider reach than Samsung’s previous products. In addition, the RS85 has been designed to reduce sonographer fatigue, adding a touchscreen for easier control of the device and shortening scanning times to improve usability.

Samsung Medison, which has achieved great results in ultrasonic diagnosis devices for obstetrics and gynecology, is strengthening the premium ultrasonic diagnosis device business for radiology. Based on the RS85, the company will continue to expand its products in large hospitals, and enhance research cooperation with medical experts around the world.

The RS85 will be available first in Korea and Europe in November, with more markets to follow.

Investing £1 million in training for computing teachers in the U.K.

Advancing our students’ understanding of the principles and practices of computing is critical to developing a competitive workforce for the 21st century.

In every field, businesses of all sizes are looking to hire people who understand computing, so we need more students to leave school confident in skills like coding, computational thinking, machine learning and cybersecurity.

The U.K. has already led the way in preparing for this future by making computer science education a part of the school curriculum in 2014. But we know there is more to do to ensure young people in every community have access to world-class computer science education.

A recent report from the Royal Society found that despite the good progress in recent years, only 11 percent of Key Stage 4 pupils take GCSE computer science. The majority of teachers are teaching an unfamiliar school subject without adequate support. These teachers are eager to offer computer science to their students but they need access to subject area training to build their confidence.

The U.K. government’s announcement that they’re investing £100 million for an additional 8,000 computer science teachers supported by a new National Centre for Computing is an encouraging step forward. It builds on the progress that’s been made since computing was added to the curriculum in 2014 by helping to ensure teachers have the specialist training and support they need to educate the next generation of British computer scientists.

We want to continue to play our part too.

Today we're announcing £1 million in grants to support training for secondary school computing teachers in the U.K.

The Google.org grant will allow the Raspberry Pi Foundation, the British Computer Society and the National STEM Learning Centre to deliver free computer science and pedagogy training for thousands of Key Stage 3 and Key Stage 4 teachers in England over three years, with a specific focus on disadvantaged areas.

A Raspberry Pi and Google teacher training workshop in Leeds, U.K.

Through this effort, they will make online courses and professional development resources available to teachers anywhere, anytime, for free, and deliver free in-person workshops for teachers across the country.

Googlers care deeply about helping to develop our future computer scientists, and many of them will give their time and skills to this program. A team of Google engineers and learning and development specialists will volunteer with Raspberry Pi to ensure that all teachers are able to access the online resources and courses.

This grant is part of Google’s long-standing commitment to computer science education. Through Google.org, we’ve given nearly $40 million to organizations around the globe ensuring that traditionally underrepresented students have access to opportunities to explore computer science.

In the U.K., we also support teacher recruitment and professional development by teaming up with organizations like Teach First and University of Wolverhampton, and we focus on inspiring more children, especially girls and those from disadvantaged areas, to take up computing through Code Club UK after-school clubs.

CS education and computational thinking skills are key to the future, and we’re committed to supporting Raspberry Pi—and other organizations like them—to ensure teachers and young people have the skills they’ll need to succeed.

How Sweden’s Oxievång School helps teachers navigate the journey to the “learning island”

Editor’s note: Google has just completed its first-ever Google for Education Study Tour, bringing nearly 100 educators from 12 countries around Europe to Lund, Sweden, to share ideas on innovating within their systems and creating an environment that embraces innovation. One of the highlights of the two-day event was a visit to Oxievång School in Malmö, where principal Jenny Nyberg has led the adoption of technology in the classroom. Below, Jenny explains how to support teachers during a period of technology adoption.

When we’re introducing new technology for our classrooms, I tell my teachers to imagine the ultimate goal as an island we all have to swim toward. Some of us are very fast swimmers, and we’ll figure out how to get to the island quickly, and even get around any sharks. Some of us are slow swimmers, and may be hesitant to jump in, but the strong swimmers will show us the way (and how to get around the sharks). Eventually, we all have to jump into the water.

Bringing tech-based personalized learning into the classrooms at Oxievång School was our “island,” and we’ve all completed the journey. That was particularly important given that our school, like the city of Malmö itself, is a mix of different people with varying needs. We have immigrant students as well as native Swedes; 40 percent of our students speak Swedish as their second language. But all students can become strong learners when teachers discover what motivates and excites them. When we adopted G Suite for Education, our “fast-swimmer” teachers showed their colleagues how they could now customize learning for each and every student.

Jenny Nyberg during school visit

As school leaders, my vice principals and I served as role models for using G Suite—not just for teaching, but for making our jobs easier too. We showed teachers how to use Google Sites to store information we needed every day, like staff policies and forms. We walked teachers through the Google Docs tools that let them comment on student work immediately, rather than collecting homework and giving feedback much later. When teachers saw this in action, they understood how adopting G Suite was going to make a big difference for their teaching effectiveness and their productivity.

If you want teachers to become enthusiastic about using new technology, they need to be confident in their use of it. For this, you have to give them support. So we hired a digital learning educator who works exclusively with teachers to help them build up their technology skills. Every teacher receives a personalized development plan with a list of resources for training.

Our students have become more engaged in their coursework as teachers have become better at using Google technology to personalize learning. If students are curious about a subject, they can use their Chromebooks and G Suite tools to further explore the topic on their own. They also interact with teachers more often, even using Hangouts to meet with teachers outside of the classroom. As teachers become more confident, their enthusiasm spreads to the students.

One of the stations included students demonstrating robots they programmed with their Chromebooks

Once we give teachers basic training, we keep supporting them so that the transformation spreads throughout the school. When they need extra help with using G Suite, teachers know where to find it: they can schedule a meeting with the digital learning educator. We have team leaders across grades and subjects who help teachers follow their development plans. Once a month, we all meet at school sessions called “skrytbyt,” which roughly translates as “boost exchange.” In these sessions teachers trade stories about lessons that went well and ask for advice about how to improve lessons they find challenging. Sharing knowledge is a great way to build confidence.

As leaders in education, we have to be honest with teachers and acknowledge that change isn’t easy, but assure them that we’re here for them. Teachers worry that students know more about technology than they do—students are the digital natives, while teachers are the digital immigrants. We constantly remind teachers that they can find inspiration in each other and in their students’ knowledge, so that we all make it to the island together.

Live Stream Series | Make Good Videos GREAT with Audio

Earlier this year, Jason Levine launched a 7-part live stream series on How to Make Great Videos. In the series, Jason walks through best practices for creating video content – from importing footage to sharing with the world.

As a follow-up to that series, Jason is diving deeper into the best ways to take your video skills to the next level. He’ll spend three weeks focused on each of these topics: Motion Graphics, Audio, and Color. The Audio miniseries kicks off on December 1st.

Jason will demonstrate LIVE on the Adobe Creative Cloud Facebook page each Friday at 9amPT/12pmET/6pmCET throughout this series. Get notified when streams go live by following the Facebook page and signing up for the event: Audio. Bookmark this video playlist on the Adobe Creative Cloud YouTube channel for replays of every stream, with timestamped chapters noted in the descriptions.

Great audio makes a good impression and keeps viewers engaged with video content. Here’s what you’ll learn in this series:

Week 1: Audio in Premiere Pro CC | Live on Facebook December 1, 2017 at 9amPT

Hear how deep you can go into audio workflows without ever leaving Premiere Pro! Get a tour of the Essential Sound panel in Premiere Pro CC (and Audition, too!), with easy-to-access workflows for optimizing vocals, matching levels, and fine-tuning ambient noise. Learn best practices for working with submix tracks and Dynamics effects.

Adobe tools in the spotlight: Premiere Pro, Audition

Week 2: Audio in Audition CC | Live on Facebook December 8, 2017 at 9amPT

This session will look at the handful of unique features found in the Essential Sound panel in Audition, including Remix and Auto-ducking. Jason will also show how to bring a video file in, break out the audio channels, make changes, and export back to a video file. Learn how to remove background noises, add effects and ambience, and incorporate sound design.

Adobe tools in the spotlight: Audition, Premiere Pro, Media Encoder

Week 3: Putting It All Together – Audio in Premiere Pro CC and Audition CC | Live on Facebook December 15, 2017 at 9amPT

Get your audio adjustments started in Premiere Pro, then – through powerful, lossless interchange – seamlessly bring your project into Audition for final mastering and delivery. And learn how to create broadcast-safe audio, as well as all the options for exporting your project through Media Encoder.

Adobe tools in the spotlight: Premiere Pro, Audition, Media Encoder

Next up…

Now that your videos have stunning graphics and audio, learn how to boost the Color quality in the next series. Sign up now for a reminder when these sessions go live on Fridays at 9amPT:

Color: January 12 – January 26, 2018

Fact-checking the French election: lessons from CrossCheck, a collaborative effort to combat misinformation

Nine months ago, 37 newsrooms worked together to combat misinformation in the run-up to the French Presidential election. Organized by First Draft, and supported by the Google News Lab, CrossCheck launched a virtual newsroom, where fact-checkers collaborated to verify disputed online content and share fact-checked information back to the public.

The initiative was a part of the News Lab’s broader effort to help journalists curb the spread of misinformation during important cultural and political moments. With a recent study finding that nearly 25% of all news stories about the French Presidential election shared on social media were fake, it was important for French newsrooms to work closely together to combat misinformation in a timely fashion. 

Yesterday at our office in Paris, alongside many of the newsrooms who took part in the initiative, we released a report on the project produced by academics from the University of Toulouse and Grenoble Alpes University. The report explored the impact the project had on the newsrooms and journalists involved, and the general public.

  A few themes emerged from the report:

  • Accuracy in reporting rises above competition. While news organizations operate in a highly competitive landscape, there was broad agreement that “debunking work should not be competitive” and should be “considered a public service." That spirit was echoed by the willingness of 100 journalists to work together and share information for ten weeks leading up to Election Day. Many of the journalists talked about the sense of pride they felt doing this work together. As one journalist put it, “debunking fake news is not a scoop.”    
  • The initiative helped spread best practices around verification for journalists. Journalists interviewed for the report discussed the value of the new skills they picked up around fact-checking, image verification, and video authentication—and the lasting impact those skills would have on their work. One journalist noted, “I strengthened my reflexes, I progressed in my profession, in fact-checking, and gained efficiency and speed working with user generated content.”
  • Efforts to ensure accuracy in reporting are important for news consumers. The project resonated with many news consumers who saw the effort as independent, impartial and credible (reinforced by the number of news organizations that participated). By the end of the election, the CrossCheck blog hit nearly 600,000 page views, had roughly 5K followers on Twitter and 180K followers on Facebook (where its videos amassed 1.2 million views). As one news reader noted, "many people around me were convinced that a particular piece of misinformation was true before I demonstrated the opposite to them. This changed how they voted.”

You can learn more about the News Lab’s efforts to work with the news industry to increase trust and fight misinformation here.

‘Tis the season to Fi it Forward

With the season for giving right around the corner, we’re excited to kick off the Fi it Forward referral challenge. The challenge is rolling out today starting on desktop.

As in our last referral challenge, participants will earn prizes for the referrals they make throughout the challenge. In the Fi it Forward challenge, you can win up to two hardware gifts when you refer friends to Project Fi: a Google Chromecast and the new Android One moto x4.

But we’re most excited about our opportunity to pay it forward with our third gift. At the end of the challenge, Project Fi will donate $50,000 to the Information Technology Disaster Resource Center (ITDRC). We’re thrilled to see organizations like the ITDRC harness the power of communications technology to make a meaningful difference in crisis response and recovery, and we’re grateful to come together as a community to support their initiatives. Project Fi users don’t have to take any action to participate in the community gift—you’re already supporting the ITDRC’s disaster relief efforts just by being a part of Project Fi.

Ready to get started? Remember to enter the challenge and get your referrals in by December 17. We can’t wait to Fi it Forward with all of you this holiday season.

Lights, shadows and silhouettes by #teampixel

Shadows don’t always have to be scary—they can be downright magical. This week, #teampixel is sharing everything from a solitary lemon’s shadow to palm trees silhouetted against a vivid sky in Venice, CA. Come chase shadows with us and see what you find.

If you’d like to be featured on @google and The Keyword, tag your Pixel photos with #teampixel and you might see yourself next.

7 ways the Assistant can help you get ready for Turkey Day

Thanksgiving is just a few days away and, as always, your Google Assistant is ready to help. So while the turkey cooks and the family gathers, here are some questions to ask your Assistant. 

  • Show up to dinner on time: “Ok Google, how’s traffic?”
  • Prepare accordingly: “Ok Google, set a turkey timer for 4 hours.”
  • And don’t forget dessert: “Ok Google, add apple pie and pumpkin pie to my shopping list”
  • Play a game while you wait for turkey: “Ok Google, play Thanksgiving Mad Libs” 
  • Hear a funny tale: “Ok Google, tell me a turkey story” 
  • Learn something new: “Ok Google, give me a fun fact about Thanksgiving”
  • When Thanksgiving’s over, get ready for the next occasion:  “Ok Google, play holiday music” 

Happy Thanksgiving 🦃

Developing a VR game in just two weeks

Earlier this year, 3D modeler Jarlan Perez joined the Blocks team for a two-week sprint. The goal of his time with the team was to create a fully immersive virtual reality game in just two weeks using Blocks and Unreal Engine, two tools that have significantly influenced his process as a modeler and game enthusiast.

The result was “Blocks Isle,” the first level of a game that takes you on a journey to find your long lost friend in a sci-fi land of wonder. To win, you must solve a puzzle using hidden clues and interactions throughout the experience.

You start out on a strange desert island. After uncovering some clues and pulling a handy lever, a rocky pathway opens for exploration. Up ahead, hidden radios and books reveal clues to solve the puzzle.

Initial steps to get onto Blocks Isle. Levers and teleportation immerse the user in a new world.

Solving the puzzle on Blocks Isle

We caught up with Jarlan to hear more about his process and advice for other developers building immersive experiences using Blocks and Unreal Engine 4.

Brittany: Tell us about using Blocks and Unreal to develop a game in such a short amount of time.

Jarlan: Tag teaming both pieces of software worked very well! Blocks allowed me to visualize and be in the space during the modeling and conceptual phase. Unreal is like giving an artist magical powers: I’m able to fully build a proof of concept and implement functionality without having to be a professional programmer.

I found myself spending part of the day in Blocks experimenting with concepts and the rest in Unreal creating basic functionality for those ideas. This method allowed for rapid prototyping and was later beneficial when populating the space with art assets.

Basic prototype in Unreal

What tips and tricks did you uncover that made it easy to build your game?

Being able to build large parts of the environment while standing smack dab in the middle of it is wonderful.

A big thing that I found myself doing is blowing the scene up to actual size, standing in it, and using a combination of the move grip and me moving my arms back and forth to simulate walking within the space. It helped me further understand how I wanted the player to navigate the space and where certain things needed to be placed. Again all within Blocks and no code.
Simulating walking through the experience in Blocks, as part of the creation process

Another general tip: the snap trigger is your friend! I’ve used it for most of my modeling in Blocks to snap and place assets.

Using Blocks’ snapping feature to align shapes in the environment

How did you experiment with different ideas and concepts?

I had a few different concepts when I started the project. Blocks allowed me to quickly build a mock up of each for testing.

Blocks is an amazing tool for spatial prototyping. Before bringing a scene into Unreal, I’d blow it up to scale and move around in the space to see if it makes sense for what I’m trying to achieve. This saved me so much time.
Further development of the Blocks Isle concept

Without Blocks, how might this process have been different?

After all is said and done, I still had to take the geometry from Blocks and bring it into a 3D program for unwrapping and lightmap baking.

That said, even though I am proficient in traditional 3D modeling, I think the project would have taken longer to put together without Blocks. Blocks helped me take out some steps in the process. Traditionally I’d model out the scene and export pieces as I went, bringing them into the engine, placing them, and moving around to get a sense of how the space feels. All that got combined inside Blocks. Oh, and not to mention color exploration. If I wanted to try out colors I’d also have to create materials and place them on each asset during the in-engine test which takes more time. I can easily preview all of that in Blocks.

What advice would you give to other game developers about using these tools?

Keep exploring and always stay hungry. Be on the lookout for new tools that can improve your process and don’t be afraid of trying something new. If it doesn’t work out, it’s ok. We learn so much more from the challenges we take on than from the ones we don’t face by walking the easy path.

There are some amazing low poly games and artists out there. I think many artists would benefit from making models in VR using Blocks. If I was able to finish this project in two weeks, I can only imagine what a small team could do. Give it a try, and post your creations or questions using #MadeWithBlocks.

If you’d like to experience Blocks Isle on the HTC Vive, you can download the game.

The High Five: our searches go on, and on

Turkey, “Titanic” and the pope’s new ride were on our minds this week. Here are a few of the week’s top search trends, with data from the Google News Lab.

Almost time for turkey

As people in the U.S. prepare to gather around the table for Thanksgiving next week, our Thanksgiving insights page has all the trends. Pumpkin pie dominates searches in the U.S., but pecan pie is more popular in the southeast and apple pie is the state favorite in New Hampshire and Massachusetts. A smoked turkey is popular in most states, though some contend it should be roasted, fried or grilled. And Friendsgiving continues to rise in popularity, with searches like “friendsgiving ideas,” “friendsgiving invitations” and “friendsgiving games.”

We’ll never let go

Two decades ago, “Titanic” left an iceberg-sized hole in our hearts, and now it’s coming back to theaters in honor of its 20-year anniversary. In the years since its debut, search interest in “Titanic” reached its highest point globally in April 2012 when Titanic in 3D was released. All this talk of sinking ships made us think about other famous boats—the top searched shipwrecks this week include the Batavia, the Edmund Fitzgerald and the USS Indianapolis.

Hot wheels

The “popemobile” got an upgrade this week. Lamborghini gifted the pope a special edition luxury car, which he decided to auction off for charity. Though the pope is known for his affinity for Fiats, interest in “Pope Lamborghini” zoomed 190 percent higher than “Pope Fiat.” People also searched to find out, “Why did the Lamborghini company give the pope a car?” and “How much does the Lamborghini that they gave the pope cost?”

That’s a foul

Searches for “UCLA basketball players” shot 330 percent higher this week when three players returned home after being arrested for shoplifting while on tour with the team in China. The search queries dribbled in: “How long are the UCLA players suspended for?” “Why did China let the UCLA players go?” and “What were the UCLA players stealing?”

All about the music

With hits like “Despacito” and “Mi Gente” taking over the globe this year, the Latin Grammys last night were a hot ticket. People searched “How to watch the Latin Grammy awards online,” “What time are the Latin Grammy awards on?” and “How does music qualify for a Latin Grammy award?” Of the nominees for Record of the Year, “Despacito,” “Guerra,” and “Felices Los 4” were the most searched.

An AI Resident at work: Suhani Vora and her work on genomics

Suhani Vora is a bioengineer, aspiring (and self-taught) machine learning expert, SNES Super Mario World ninja, and Google AI Resident. This means that she’s part of a 12-month research training program designed to jumpstart a career in machine learning. Residents, who are paired with Google AI mentors to work on research projects according to their interests, apply machine learning to their expertise in various backgrounds—from computer science to epidemiology.

I caught up with Suhani to hear more about her work as an AI Resident, her typical day, and how AI can help transform the field of genomics.

Phing: How did you get into machine learning research?

Suhani: During graduate school, I worked on engineering CRISPR/Cas9 systems, which enable a wide range of research on genomes. And though I was working with the most efficient tools available for genome editing, I knew we could make progress even faster.

One important factor was our limited ability to predict what novel biological designs would work. Each design cycle, we were only using very small amounts of previously collected data and relied on individual interpretation of that data to make design decisions in the lab.

Our failure to incorporate more powerful computational methods – ones that could make use of big data and aid in the design process – was limiting our ability to make progress quickly. Knowing that machine learning methods would greatly accelerate the speed of scientific discovery, I decided to work on finding ways to apply machine learning to my own field of genetic engineering.

I reached out to researchers in the field, asking how best to get started. A Googler I knew suggested I take the machine learning course by Andrew Ng on Coursera (could not recommend it more highly), so I did that. I’ve never had more fun learning! I had also started auditing an ML course at MIT, and reading papers on deep learning applications to problems in genomics. Ultimately, I took the plunge and ended up joining the Residency program after finishing grad school.

Tell us about your role at Google, and what you’re working on right now.

I’m a cross-disciplinary deep learning researcher—I research, code, and experiment with deep learning models to explore their applicability to problems in genomics.

In the same way that we use machine learning models to predict which objects are present in an image (think: searching for your dogs in Google Photos), I research ways we can build neural networks to automatically predict the properties of a DNA sequence. This has all kinds of applications, like predicting whether a DNA mutation will cause cancer or is benign.
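To make that analogy concrete, here is a minimal sketch of the kind of model Suhani describes – a toy example with random data, not her actual research code: a DNA sequence is one-hot encoded over the four bases and passed through a small 1D convolutional network in TensorFlow that predicts a binary property of the sequence.

```python
import numpy as np
import tensorflow as tf

BASES = "ACGT"

def one_hot(seq: str) -> np.ndarray:
    """Encode a DNA string as a (length, 4) one-hot matrix."""
    return np.eye(4, dtype=np.float32)[[BASES.index(b) for b in seq]]

# Toy data: random 100-base sequences with made-up binary labels.
rng = np.random.default_rng(0)
seqs = ["".join(rng.choice(list(BASES), size=100)) for _ in range(256)]
x = np.stack([one_hot(s) for s in seqs])                # (256, 100, 4)
y = rng.integers(0, 2, size=(256, 1)).astype(np.float32)

# Small 1D conv net: the filters act as learned sequence-motif detectors.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(100, 4)),
    tf.keras.layers.Conv1D(32, kernel_size=8, activation="relu"),
    tf.keras.layers.GlobalMaxPooling1D(),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # e.g. pathogenic vs. benign
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(x, y, epochs=2, batch_size=32, verbose=0)
```

With real labeled genomic data in place of the random arrays, the same shape of model can learn sequence motifs predictive of the property of interest.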

What’s a typical day like for you?

On any given day, I’m writing code to process new genomics data, or creating a neural network in TensorFlow to model the data. Right now, a lot of my time is spent troubleshooting such models.

I also spend time chatting with fellow Residents, or a member of the TensorFlow team, to get their expertise on the experiments or code I’m writing. This could include a meeting with my two mentors, Mark DePristo and Quoc Le, top researchers in the field of machine learning who regularly provide invaluable guidance for developing the neural network models I’m interested in.

What do you like most about the AI Residency program? About working at Google?

I like the freedom to pursue topics of our interest, combined with the strong support network we have to get things done. Google is a really positive work environment, and I feel set up to succeed. In a different environment I wouldn’t have the chance to work with a world-class researcher in computational genomics like Mark, AND Quoc, one of the world’s leading machine learning researchers, at the same time and place. It’s pretty mind-blowing.

What kind of background do you need to work in machine learning?

We have such a wide array of backgrounds among our AI Residents! The only real common thread I see is a very strong desire to work on machine learning, or to apply machine learning to a particular problem of choice. I think having a strong background in linear algebra, statistics, computer science, and perhaps modeling makes things easier—but these skills are also now accessible to almost anyone with an interest, through MOOCs!

What kinds of problems do you think that AI can help solve for the world?

Ultimately, it really just depends how creative we are in figuring out what AI can do for us. Current deep learning methods have become state of the art for image recognition tasks, such as automatically detecting pets or scenes in images, and natural language processing, like translating from Chinese to English. I’m excited to see the next wave of applications in areas such as speech recognition, robotic handling, and medicine.

Interested in the AI Residency? Check out submission details and apply for the 2018 program on our Careers site.

Samsung Launches Newsroom in South Africa

An exterior view of Samsung Electronics South Africa headquarters located in Johannesburg, South Africa

Samsung today launched Samsung Newsroom South Africa, a go-to source of information for consumers and the media to keep in touch with the latest technologies and innovations from Samsung.

Samsung Newsroom South Africa will feature localized content, addressing the specific needs and interests of the region. Content shared on the platform will include not only the latest updates on Samsung, but also locally relevant launches and services, as well as compelling stories, interviews and opinions from Samsung South Africa.

Samsung entered South Africa in 1994 and has been an integral part of the region ever since, channeling a relentless drive for developing innovative technology solutions to enable customers to pursue their passions. A diverse range of Samsung’s high-tech products and services are available to South African consumers. The home-grown team plays an active role in South African life by participating in various Corporate Social Initiatives [CSI], as well as delivering solutions and services that are tailored to local needs.

Newsrooms are being rolled out all over the world to better serve local audiences with regionally significant content, on top of providing consumers with all the latest news and information about Samsung. Samsung Newsroom South Africa is the sixteenth of Samsung’s newsrooms, following the Global Newsroom and local editions in the U.S., Korea, Vietnam, Brazil, India (English and Hindi), Germany, Russia, Mexico, the U.K., Argentina, Malaysia, Italy, and Spain. Samsung Newsroom South Africa can be accessed at https://news.samsung.com/za/

Android Things Contest Winners

Posted by Dave Smith, Developer Advocate for IoT

Back in September, we worked with Hackster.io to encourage the developer community to build smart connected devices using Android Things and post their projects to the Developer Challenge for Android Things. The goal was to showcase the combination of turnkey hardware and a powerful SDK for building and maintaining devices at scale.

Thank you to everyone who participated in the contest and submitted a project or idea. We had over 1100 participants register for the contest, resulting in over 350 submissions. Out of that group, we've chosen three winners. Each winner will receive support and tools from Dragon Innovation to develop their concepts into commercial products. Join us in congratulating the following makers!

Best Enterprise Project: Distributed Air Quality Monitoring

Maker: James Puderer

Monitor air quality on a street-by-street level using Android Things, Google Cloud IoT Core, and taxis!

This project showcases how Android Things makes it easy to build devices that integrate with the various services provided by the Google Cloud Platform for robust data collection and analysis. It's a clever end-to-end solution that shows understanding of both the problem domain as well as the technology.

Best Start Up Project: BrewCentral

Maker: Trent Shumay and Steven Pridie

Brewing amazing beer is a balance of art, science, and ritual. The BrewCentral system makes it possible for anyone to do an all-grain brew!

BrewCentral pairs a real-time PID controller with the touch-enabled UI and decision-making compute power of Android Things. The result is a system that accurately controls the time, temperature, and flow rates necessary to achieve repeatable results during a brew cycle. The planned enhancements for cloud-based brewing recipes will make this a connected experience for the entire brewing community.

Best IoT Project: BrailleBox - Braille News Reader

Maker: Joe Birch

BrailleBox is a small piece of hardware that empowers users who are hard-of-sight to read the latest news articles in Braille.

This project is a great use case of using IoT to have a social impact. The current proof of concept streams articles from a news feed to the Braille pad, but this project has the potential to leverage machine learning on the device to translate additional input from the physical world into a Braille result.

Honorable Mentions

The community submitted some amazing projects for the contest, which made the choice of picking only three winners extremely difficult. Here are a few of our favorite projects that weren't selected for a prize:

  • Andro Cart: A shopping cart device powered by Android Things. Designed to help decentralize point of sale (POS) billing.
  • SIGHT: For the Blind: A pair of smart glasses for the blind, powered by Android Things and TensorFlow.
  • Industrial IoT Gateway: A smart industrial gateway for the IoT world based on Android Things.
  • Sentinel: The first semi-autonomous home security robot based on Android Things.
  • Word Clock: A creative take on reading the time, powered by Android Things. Control it via the Nearby API or the Google Assistant.

We encourage everyone to check out all the new projects in the Google Hackster community, and submit your own as well! You can also join Google's IoT Developers Community on Google+, a great resource to get updates, ask questions, and discuss ideas. We look forward to seeing what exciting projects you build!

Gaming on the Big Screen: 65” QLED TV and 49” QLED Monitor

In any sport, athletes and amateurs alike are concerned about how their equipment might impact their performance. For gamers, the capabilities of their hardware are fundamental to their experiences. It can be particularly frustrating when the viewing field of the game is interrupted by the bezels of three monitors, or by the slow response of the display to the commands of the gamer’s controllers. With more and more game enthusiasts turning to bigger, wider screens, one question arises: should you play on Samsung’s 65” QLED TV Q8C or Samsung’s 49” CHG90 QLED Gaming Monitor?

 

 

 

Response Speeds to Suit Gamers’ Needs

Response speed – how fast the screen responds to commands or changes in the game – can help to make or break a gamer’s stats. When the difference between top scorer and average player rests on reactions, especially in racing, first-person shooter (FPS), flight simulation and action-heavy games, every millisecond can have an impact. Three main factors can influence response speed: MPRT, refresh rate and input lag.

 

MPRT: This stands for motion picture response time – the response time that is actually perceptible to the naked eye. The faster the screen’s response time, the lower its image retention, which helps to reduce eye strain and prevent “ghosting”, where a faint image of a moving object’s previous position lingers on screen. The QLED TV Q8C has a response time of between 2 and 8 milliseconds depending on the setting*, while the CHG90 QLED Gaming Monitor clocks in at 1 millisecond.

 

Refresh Rate: Many know that there is a strong correlation between a display’s frames per second and the fluidity of motion on screen. Fewer, however, are aware of the relationship between frames per second and the refresh rate (measured in Hz). A display’s refresh rate is how often the image on the screen is redrawn; a higher refresh rate provides smoother motion and less blurring, helping gamers keep their focus steady. While many TVs offer a standard refresh rate of 60Hz, the QLED TV Q8C refreshes at 120Hz and the CHG90 QLED Gaming Monitor at 144Hz – a new frame roughly every 7 milliseconds, compared with about 17 milliseconds at 60Hz.

Input Lag: A display’s input lag plays a vital role in response speed, as it determines how long the monitor or TV takes to react to commands from the game controls – whether a keyboard or a console’s controllers. As input lag decreases and the screen responds more quickly to commands, player reactions can be even faster. At 20 milliseconds in “Game Mode”, the input lag of Samsung’s QLED TV Q8C is the lowest among TVs on the market, and Samsung’s CHG90 QLED Gaming Monitor fully supports Radeon FreeSync 2™ technology, which aims to keep input lag to a minimum.

Burn-In Free, Guaranteed

 

Screen burn-in has certainly become a hot topic recently, as more consumers encounter the problem of image retention, especially on OLED displays. Since burn-in is often attributed to parts of the screen remaining static for an extended period, gaming in particular runs the risk of burn-in after prolonged play.

 

However, Samsung QLED screens (both TVs and monitors) are free from burn-in even while gaming, because they utilize new metal Quantum Dot technology. Quantum Dots are semiconductor nanocrystals that can produce pure monochromatic light. Made of inorganic material of the kind found in minerals such as rocks and sand, they contain no carbon compounds and so are far less susceptible to change than organic molecules. As such, Samsung’s QLED TVs worldwide come with a 10-year burn-in warranty** upon purchase.

 

HDR: A High Dynamic Range Revolution

 

High Dynamic Range (HDR) is quite literally a game-changer for players. The technology offers greater contrast, allowing dark areas to appear darker and light areas brighter. Black is monochrome no more, as HDR provides stunning depth and range of color, offering hues and shades never before rendered on screen. Developers are now producing HDR games, and in many situations being able to see shadows and silhouettes clearly, even against a dark background, gives a player a significant edge over the competition. Both Samsung’s QLED TV Q8C and the CHG90 QLED Gaming Monitor support HDR technology.

Big Screen Benefits

 

With its 49-inch, 32:9 super ultra-wide curved screen, the CHG90 QLED Gaming Monitor is pushing boundaries as the widest gaming monitor available. It gives gamers a completely bezel-free, fluid and fast-paced field of view, especially in single-player adventures. Gamers can play titles such as The Witcher 3: Wild Hunt, Assassin’s Creed Origins, Rise of the Tomb Raider, Dota 2 and League of Legends (LoL) exactly as their developers intended, taking full advantage of the immersive gaming experience the monitor provides.

 

On the other hand, the 65” QLED TV Q8C is perfect for users who want to bring their games into the heart of the home and play against family and friends side by side, enjoying immersive pictures and 100% color volume together. A gathering can enjoy many games together, from sports titles to racing games. What’s more, PC gaming on a large TV is hassle-free, as Samsung’s QLED TV Q8C can stream directly from an in-home PC using the Steam Link app available from the Samsung Smart Hub.

 

Gamers can rest easy knowing that QLED monitors and TVs are optimized for gaming, offering large-screen gameplay to enjoy alone or with friends. Thanks to the stunning picture quality and burn-in resistance of QLED displays, both the QLED TV and the QLED Gaming Monitor make a strong addition to any gamer’s suite of hardware.

*MPRT of 2ms can be achieved when LED Clear Motion mode is on. MPRT is 8ms in Default mode.

**Details of the burn-in guarantee for QLED TVs vary by country, and the policy will gradually be introduced in other countries as the QLED TV is launched in each market.

Passion, Timelines, and Curiosity: Design Insights from Adobe Creative Resident Natalie Lew

When this year’s Adobe Creative Residency kicked off in May, interaction designer and recent grad Natalie Lew was ready to get to work; she just needed to figure out what, exactly, she wanted to be working on.

“Rather than tackle a single topic over the course of this year, I’m planning to approach a series of projects, testing out different research methodologies throughout,” she says. “As a recent grad, professional networking was a great place to start.” Over at Behance, Lew gives a comprehensive breakdown of how she pulled together her first project: Veet, an app that examines the future of professional networking. By approaching the issue from a millennial perspective, she was able to take an often overwhelming prospect–making meaningful career connections–and turn it into something manageable, and even personal. (Not an easy feat!)

Here, she shares with us five key lessons to kickstart a kickass UX project:

1. Identify Your Passions–Then Connect the Dots Between Them

I had been circling around a few different ideas and concepts to pursue during the residency. When I was finally ready to dive in, I had to stop and think: What am I really passionate about? I made big lists of topics I was thinking about a lot, and stuff I wanted to improve upon. A few main categories excited me the most:

  • Future technologies and what they look like. This includes AR and VR, UX for voice commands, and how different communities can and will come together.
  • How things are made, and the consequences they’ll have. How might we ensure that those future technologies are equitable and human-centered?
  • What does process look like for me? As a budding designer, I want to work on how I can develop my creative process; take ownership of it; then share the pieces that are successful (and those that aren’t!).

2. Establish a Timeline (and Daily To-Dos) to Stay Focused And Efficient

If you’re working with a client, they’ll have a deadline, and you figure out what you can do for them in that time. For the residency, I’m the client; I’m setting up parameters for myself to make sure that I’m realistic about the quality of the product I can come up with, within a deadline I set myself.

I create step-by-step timelines for my own work because it’s important for me to feel like I’m in control of a project–not that the powers of the universe are just, like: “You can do whatever, whenever!” That mentality means I won’t get anything done. So I like to know what I’m doing every day; I need to wake up and say, this is what I’m going to work on. It doesn’t have to be hour-by-hour, but I should have a handle on what I want to think about and get done as if everything is a piece that fits into a larger puzzle. That makes me feel empowered. (Just remember–it’s okay to make mistakes, too!)

One of the most important things a UX designer can do is to figure out their most effective work methodology. Do you like going heads-down for four hours, and when you get up you’re done for the day? Or are you working on something all day long, with little breaks in between? How can you get the most done? Pay attention to that, and build it into your workflow.

3. Abandon Your Expectations, and Embrace Nuanced Research

I love doing research. When I started talking to millennials for this app idea, their responses defied my initial notions. I felt that networking events could sometimes feel awkward, but thought that might have just been my perception. Then everyone used the word “overwhelming” when describing their own experiences. Everyone also seemed to think of themselves as introverts in big social situations but said that one of their favorite things to do was to meet new people one-on-one. I had never heard people discuss these things with such a unified voice before.

4. Ask, Listen, and Observe With Compassion and Intent

You don’t need to talk to a ton of people to get great insights; if you can have conversations with six to 12 people, you should be able to generate really good material. Start with a list of basic, but open-ended questions–like “What do you think of professional networking?”–with additional questions that can lead to rich storytelling opportunities, like “Tell me about a memory you have about a specific networking experience.”

Remember that research activities are not only about talking; they’re about observation, too. When I had people perform the Circle of Trust activity, some would actually grimace or recoil when confronted with certain ideas. Expressions, attitudes, feelings, motions–these are all valuable.

5. Be Curious, and Make Things With People–Not At People

Always ask: How can I learn more, and show my learnings through design? Take time to be knowledgeable about what it is you’re designing, and who you’re designing for. It’s more than a slick UI and visual components.

The most thoughtful thing you can do as a designer is to really consider your role. I don’t think we’re do-all, end-all, be-all heroes; instead, we should be communicating with people who will be impacted by our work, and translating their insights into design solutions. We become the channel that their ideas flow through.

Inspired by Natalie’s approach to UX? Check out these best practices for design that makes users – and their insights, needs, and expertise – a priority.

For regular UX insights sent straight to your inbox, sign up for Adobe’s experience design newsletter! We’ll also be sharing more from Natalie–in the meantime, you can keep up with her latest news on Twitter and Behance!

Payment Revolution: How Apple and Android Are Changing How We Pay for Products Today

‘Paying for things’ is a huge business. Every day in the U.S., we spend $12 billion across more than 200 million payment transactions, and we use our debit or credit cards for most of that. Essentially, those cards are just little pieces of plastic with exposed numbers and a magnetic stripe interface that is vulnerable to skimmers. They are so easy to compromise that it’s no wonder people have been dreaming about replacing plastic cards for years.

Fortunately, the way we pay is changing: smartphones and wearables are redefining payments, and more and more we’re seeing people pay for items in the real world with their mobile devices.

Your smartphone or smartwatch can be your ticket to a more streamlined shopping experience. In this article, I’ll outline two popular mobile payment solutions: Apple Pay and Android Pay by Google.

How They Work

If you put the two systems next to each other, you’ll notice they do basically the same thing, and even their user interfaces are similar. In both systems, users can add their debit and credit cards directly into the app, either by taking a picture of the card or by entering the information manually. Both Apple Pay and Android Pay utilize NFC (near field communication) technology to communicate transactions to NFC-enabled payment terminals. Paying with either is really simple; you don’t even need to open the app – just hold your device to the terminal to pay.

Apple Pay and Android Pay work with NFC contactless payment terminals. You don’t need a WiFi or cellular connection on your device to complete a payment. Image credit: Google

While Apple Pay and Android Pay are mostly used to pay for items in the real world, many iOS and Android apps also support payment using these services.

Apple Pay and Android Pay allow simpler checkout in apps. For example, you can choose Apple Pay as a default payment option in the Uber app for iOS.

Apple Pay and Android Pay can also be used as payment methods on websites. The next time you make a purchase on the web using your iPhone or Android device, check the payment methods and, if you see the Apple Pay or Android Pay logo, you can pay in one click without having to create an account or fill out lengthy forms.

Services like Groupon eliminate the need to use credit or debit cards to purchase something.

Which Devices Support Them?

Which payment system you use comes down to which phone you have. Apple Pay is supported on Apple devices from the iPhone 6 onward. Each transaction must be authenticated using Touch ID or Face ID.

On the Android side, there are far more compatible phones. Pretty much any device running Android 4.4 KitKat or later with an NFC chip built in can support Android Pay. According to The Verge, Android Pay was already compatible with 70% of Android devices when it was released.

How Secure Are The Technologies?

As with so many new technologies today, the biggest worry about something like Apple Pay or Android Pay is security. We live in an age where security leaks are publicly reported almost all the time.

However, for both payment systems, security concerns are becoming less of an issue. Here are a few facts that show you don’t have to worry too much about security when using them:

  • Real credit card details are never stored on the device. Neither Apple Pay nor Android Pay emulates the signal used when you make a contactless payment with your physical debit or credit card. Instead, they create a virtual card that’s used to make payments. If you lose your Android phone or iPhone, there’s no need to cancel your credit card, because it’s not stored on the device.
  • Credit card details are never shared during a payment transaction. Both Apple Pay and Android Pay leverage tokenization – each transaction is processed via individual random account numbers rather than the actual credit or debit card account number. This means the merchant never sees real credit card numbers during a payment, and even if somebody intercepts the NFC signal, no valuable information can be stolen. (See the conceptual sketch after this list.)

  • Both systems are able to use fingerprint scanners as an extra level of security (in fact, authentication with Touch ID or Face ID is mandatory for Apple Pay).

Touch ID on an Apple iPhone

  • Finally, payments are only sent when your phone is powered on and unlocked. If it is locked and not in use, your account should be safe.
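
To make the tokenization idea concrete, here is a purely illustrative Kotlin sketch – this is not how either wallet is actually implemented, and the names (TokenVault, Card, PaymentToken) are invented for illustration. A random virtual account number stands in for the real card, so the merchant-facing side never sees the real number:

    import java.security.SecureRandom

    // Conceptual sketch only: real wallets use network-issued tokens,
    // per-transaction cryptograms and hardware-backed keys.
    data class Card(val pan: String)          // the real card number; never leaves the vault
    data class PaymentToken(val dan: String)  // a virtual "device account number"

    class TokenVault {
        private val vault = mutableMapOf<String, Card>()
        private val rng = SecureRandom()

        // Issue a random stand-in number for a card.
        fun tokenize(card: Card): PaymentToken {
            val dan = (1..16).map { rng.nextInt(10) }.joinToString("")
            vault[dan] = card
            return PaymentToken(dan)
        }

        // Only the issuer/network side can map the token back to the card.
        fun resolve(token: PaymentToken): Card? = vault[token.dan]
    }

    fun main() {
        val vault = TokenVault()
        val token = vault.tokenize(Card(pan = "4111111111111111"))
        // A merchant terminal (or an eavesdropper on the NFC signal)
        // only ever sees the token, never the real number.
        println("Merchant sees: ${token.dan}")
    }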

Which Banks Support Them?

All major American banks like American Express, Bank of America, Wells Fargo, BBVA Compass, Capital One, Chase, and Citi support both payment systems. You can see a full list of banks supporting Apple Pay in the US here. Check out this page to see a list of banks that support Android Pay.

Some banks (like Bank of America or Wells Fargo) go even further in supporting payment systems. They’ve installed NFC-enabled ATMs around the U.S., which allow you to access your bank account to withdraw cash using your phone.

Apple Pay and Android Pay now support card-free ATM transactions at Bank of America. Image credit: WhatWhatTech

Which Shops Accept Them?

Currently, Apple Pay is supported by 35% of retailers in the U.S. in more than 4 million locations. According to Jennifer Bailey, the head of Apple Pay, Apple expects that two thirds of the top 100 retailers will support Apple Pay this upcoming year.

As a user, you can check the point-of-sale device for the Apple Pay or Android Pay logo, or another symbol that indicates if contactless payments are accepted.

This symbol means contactless payments are accepted by the device.

How Many People Use It?

According to a report by Juniper Research, Apple Pay and Android Pay will have 86 million and 24 million users, respectively, by the end of the year. While Juniper expects Apple to dominate the contactless payment market over the next four years, it’s clear that Android Pay is also seeing big increases in usage. Taking the number of compatible devices into account, it could surpass Apple Pay usage within a few years.

Image credits: Juniper Research

Although both platforms have grown since their launch dates, the percentage of users who use Apple Pay or Android Pay is still pretty low. According to Crone Consulting, only 4% of Apple’s users with Apple Pay-enabled iPhones use the service. As for Android Pay, a paltry 1% of users with compatible devices use the service.

The Problem with Mobile Payments

Mobile payments have a few technical issues that will hopefully be solved in the near future:

  • Bank support. While the total number of banks supported by the payment systems is impressive, not all banks are supported and you may find your own bank is missing from the list.
  • Merchant support. Considering both Android Pay and Apple Pay are still relatively new, not all places actually use these technologies. While users may get a new phone every two years or so, merchants don’t replace their POS infrastructure nearly as often. The main problem mobile payment systems face in the U.S. and elsewhere is limited support for NFC payments.
  • Unnecessary actions during payment. Some point-of-sale terminals ask for a PIN or for a signature, even after the ‘done’ chime has sounded. While PINs may be considered a ‘security’ precaution, a signature doesn’t make sense. You can sign anything and it doesn’t matter. This negates one of the best features of Apple Pay and Android Pay: speed.

Mobile payment brings up another problem that doesn’t have much to do with technology–spendaholics love it. Less friction during the process of payment makes it much easier to spend extra money.

Conclusion

It’s clear that mobile payments are taking us one step closer to a wallet-free future. Both Apple Pay and Android Pay are great examples of hardware and software working in tandem. But no matter which service you choose, don’t recycle your old-fashioned leather wallet just yet. The mobile payments revolution is just getting started, and it’ll take some time before most merchants are ready to support mobile payment technology.

Samsung Smart TVs Enhance Home Entertainment with New Berliner Philharmoniker App

 

Concert performances from the Berliner Philharmoniker are now available to more people than ever thanks to an updated version of the Digital Concert Hall app on Samsung Smart TVs.

 

The Digital Concert Hall app on Samsung Smart TVs will be updated in time for the Berliner Philharmoniker’s Asia Tour. The updated version of the app offers an improved user experience and an enhanced feature set, including curated playlists and overviews of the works performed in each concert. Additionally, the Digital Concert Hall will be available in Korean for the first time. The app is available on 2015-2017 Smart TVs.

Enjoy the Acclaimed Live Orchestra on TVs

 

The Berliner Philharmoniker is the most prolifically recorded orchestra in the world, with more than 100 years of recording history. Named in many polls as the world’s best orchestra, it is also known as the first to be recorded on CD, back in 1980. The orchestra is no stranger, then, to innovation.

 

Since 2008, the Berliner Philharmoniker has been broadcasting live streams and archive recordings of its concerts through its Digital Concert Hall. With one of the largest collections of recordings of any orchestra, it boasts more than 1,500 audio-visually recorded works in the Digital Concert Hall, including 50 live seasonal concerts, interviews, concert introductions, documentaries, artist portraits and educational-program concerts. In addition to several improvements to its design and user interface, the new app is now available in German, English, Spanish, Japanese and Korean. New Samsung Smart TV users can enjoy the Digital Concert Hall app with a 30-day trial.

 

“The new Digital Concert Hall app is a prime example of distributing excellent content across new digital media platforms. At Samsung, our goal is to optimally reproduce the image and sound of such high-quality productions with our home entertainment devices and Smart TVs,” said Sangsook Han, Vice President of the Visual Display Business at Samsung Electronics.

 

 

Seamless Discovery for Home Entertainment

 

Samsung Electronics has been using new music and gaming functionality to further enhance its Smart TVs. The new apps help users unlock new levels of entertainment, from the finest classical music concerts and live music discovery to big-screen gaming streamed directly from a PC.

 

Among the most recent moves Samsung has made to provide increased value to Smart TV customers is a partnership with the popular music discovery service Shazam, which grants Smart TV viewers the ability to identify songs playing in TV shows or films. Thanks to its integration with Samsung Smart TVs, it is now easier than ever for users to discover and enjoy music.

 

Samsung Smart TVs are also offering a new level of connectivity for gamers to enjoy. Samsung recently announced the expansion of Valve’s Steam Link to all of its 2016 and 2017 Smart TVs*. Instead of connecting Steam Link hardware to the TV, this technology enables users to stream their favorite games from their in-home PC directly to their Samsung Smart TVs via an app in the Samsung Smart Hub.

 

What customers love about Samsung Smart TVs is that they keep getting smarter and better connected. Samsung is always introducing new integrations and services, which means you’ll keep discovering new ways to enjoy your TV.

 

*Launch dates for the music and game apps vary by country and region.

Doubling down in Japan

With Ruth, our CFO, visiting the site of our new Tokyo office today.

In 2001, when Google was just three years old, we opened our first office outside the U.S. That office was right here in Tokyo. Before Chrome, Gmail and YouTube, there was Google Japan.

16 years later, Google has grown quite a bit—we now have offices in over 150 cities, spanning nearly 60 countries—and Google Japan has grown as well, to 1,300 Googlers strong.

Today, I’m excited to announce the next phase of our long-term investment and presence in Japan: a new office in Shibuya, Tokyo, opening in 2019, that will allow us to double the size of our team here. We are also announcing an initiative, working with Minna No Code, to help bring computer science education to more than two million students across Japan.

Doubling our presence in Japan means growing our strong engineering teams here. When an earthquake hit Tohoku in 2011, members of these teams worked quickly to launch tools like Person Finder that we still use when disasters strike around the world. And they continue to work on and improve products like Search and Maps. It also means growing our teams who work every day to help Japanese companies grow. Their work, and the tools we provide, helped Japanese businesses increase their revenue by more than $6.7 billion in 2015 alone.

We are working on some exciting ideas around the design of the new office that will let us open our doors to the community, and will share more details as plans progress.

Here are some early artist’s impressions of how we might design some of the spaces.

Finally, this is a sign of our commitment to long-term investment in Japan. It’s about creating the future with Japan’s innovators of today and those from the next generation. That’s why, through Google.org, we are partnering with Minna No Code to train thousands of teachers in computer science who will go on to teach more than two million Japanese students. This initiative is in line with Japan’s plans to ensure that all Japanese students receive a computer science education by 2020.


We can’t wait to start the next phase of our journey in Japan and to see the future that we can create together.

Announcing New Tools for the Creator Community

By Chris Hatfield, Product Manager, Video

Creators around the world are sharing their videos on Facebook to build a community around their passion — whether their passion is comedy sketches, their favorite recipes, or even knitting sweaters. On Facebook, creators can connect with more than two billion potential fans and collaborators, get to know their community, talk directly to fans with Live, and monetize with products like branded content.

We understand that creators have specific needs, and we’re committed to helping them on their journey as they grow and find their community. As part of this commitment, we’re announcing two initiatives to help creators unleash their creativity: an app that helps creators manage their presence on Facebook and a central destination where creators can get the resources they need to improve and grow.

Facebook Creator App

The Facebook Creator app is a one-stop shop for creators of all kinds, helping them take their passions to the next level. With the app, creators can easily create original video, go live with exclusive features, and connect with their community on Facebook – all from their pocket.

If you’re a creator, there are a range of features for you:

  • Live Creative Kit: Access exclusive tools that make it easy to create live broadcasts with a personalized and fun feel. Creators can add intros as openers to their live broadcasts, outros that conclude them, custom live stickers that viewers can use to interact, and graphic frames to create a consistent brand.
  • Community Tab: Connect with fans and collaborators with a unified inbox, which centralizes comments from Facebook and Instagram, and messages from Messenger.
  • Camera & Stories: Use fun camera effects and frames and easily crosspost content to other platforms. Creators can also access Facebook Stories to engage with their fans.
  • Insights: Easily access metrics to inform content creation, including analytics about your Page, videos and fans.

If you’re a creator making a show for Watch, you can also log into the app as your Show Page to access the features above. We’re currently testing shows with a set of creators and plan to roll out more broadly in the future.

We’re launching the Facebook Creator app globally today on iOS, and will roll it out to Android users in the coming months. The app is open to individuals with Pages or profiles, and you can download it from the Apple App Store today. We will be gathering feedback and iterating on the app to create the best experience for creators.

A New Website for Creators

Facebook for Creators is a new website where creators can find resources and tips on how to create great videos, connect with fans, and grow on Facebook.

If you’re a creator, with Facebook for Creators you can:

  • Learn skills and techniques to make your content shine
  • Find answers to common creator-specific questions
  • Join the community to be considered for early access to new features and tools

Creators are invited to join the Facebook for Creators community here.

We are excited to see how creators use these tools to share video, interact with their followers, and grow their community on Facebook. We are just getting started, and look forward to continuing to work collaboratively with creators to make their experience on Facebook even better.

Reflecting on a year’s worth of Chrome security improvements

In the next few weeks, you’ll probably be spending lots of time online buying gifts for your friends, family and “extended family” (your dog, duh). And as always, you want to do so securely. Picking the perfect present is hard enough; you shouldn’t have to worry about staying safe while you’re shopping.

Security has always been a top priority for Chrome, and this year we made a bunch of improvements to help keep your information even safer, and encourage sites across the web to become more secure as well. We’re giving you a rundown of those upgrades today, so that you can concentrate on buying the warmest new slippers for your dad or the perfect new holiday sweater for your dog in the next few weeks.


More protection from dangerous and deceptive sites


For years, Google Safe Browsing has scanned the web looking for potential dangers—like sites with malware or phishing schemes that try to steal your personal information—and warned users to steer clear. This year, we announced that Safe Browsing protects more than 3 billion devices; in Chrome specifically, it shows 260 million warnings each month before users visit dangerous sites.

We’re constantly working to improve Safe Browsing and we made really encouraging progress this year, particularly with mobile devices. Safe Browsing powers the warnings we now show in Gmail’s Android and iOS mobile apps after a user clicks a link to a phishing site. We brought Safe Browsing to Android WebView (which Android apps sometimes use to open web content) in Android Oreo, so even web browsing inside other apps is safer. We also brought the new mobile-optimized Safe Browsing protocol to Chrome, which cuts 80 percent of the data used by Safe Browsing and helps Chrome stay lean.


In case you do download something nasty, this year we’ve also redesigned and upgraded the Chrome Cleanup Tool with technology from the security company ESET. Chrome will alert you if it detects unwanted software, help you remove it, and get you back in good hands.


Making the web safer, for everyone


Our security work helps protect Chrome users, but we’ve also pursued projects to help secure the web as a whole. Last year, we announced that we would mark sites that are not encrypted (i.e., served over HTTP) as “not secure” in Chrome. Since then, we’ve seen a marked increase in HTTPS usage on the web, especially among some of the web’s top sites.

If you’re researching gifts at a coffee shop or airport, you might be connecting to unfamiliar Wi-Fi, which can be risky if the sites you’re visiting don’t use the secure HTTPS protocol. With HTTPS, your connection is encrypted and your data is safe from eavesdroppers regardless of which Wi-Fi network you’re on – the person sitting next to you can’t see or meddle with what you’re doing.


An even stronger sandbox


Chrome has never relied on just one protection to secure your data. We use a layered approach with many different safeguards, including a sandbox—a feature that isolates different tabs in your browser so that if there’s a problem with one, it won’t affect the others. In the past year, we’ve added an additional sandbox layer to Chrome on Android and improved Chrome’s sandboxing on Windows and Android WebView.


So, if you’ve entered your credit card to purchase doggy nail polish in one Chrome tab and inadvertently loaded a misbehaving or malicious site in another, the sandbox will isolate the bad tab, and your credit card details will be protected.


Improving our browser warnings to keep you even safer


It should always be easy to know if you might be in danger online, and what you can do to get back to safety. Chrome communicates these risks in a variety of different ways, from a green lock for a secure HTTPS connection, to a red triangle warning if an attacker might be trying to steal your information.


By applying insights from new research that we published this year, we were able to improve or remove 25 percent of all HTTPS warnings Chrome users see. These improvements mean fewer false alarms, so you see warnings only when you really need them.

Some of Chrome’s HTTPS warnings (on the left) are actually caused by reasons unrelated to security—in this case, the user's clock was set to the wrong time. We’ve made the warnings more precise (on the right) to better explain what’s going on and how to fix it.

Unfortunately, our research didn’t help users avoid dog-grooming dangers. This is a very challenging problem that requires further analysis.


A history of strong security


Security has been a core pillar of Chrome since the very beginning. We’re always tracking our own progress, but outside perspectives are a key component of strong protections too.


The security research community has been key to strengthening Chrome security. We are extremely appreciative of their work—their reports help keep our users safer. We’ve given $4.2 million to researchers through our Vulnerability Reward Program since it launched in 2010.

Of course, we’re also happy when researchers aren’t able to find security issues. At Pwn2Own 2017, an industry event where security professionals come together to hack browsers, Chrome remained standing while other browsers were successfully exploited.


Zooming out, we worked with two top-tier security firms to independently assess Chrome’s overall security across the range of areas that are important to keep users safe. Their whitepapers found, for example, that Chrome warns users about more phishing than other major browsers, Chrome patches security vulnerabilities faster than other major browsers, and “security restrictions are best enforced in Google Chrome.” We won’t rest on these laurels, and we will never stop improving Chrome’s security protections.


So, whether you’re shopping for a new computer, concert tickets, or some perfume for your pooch, rest assured: Chrome will secure your data with the best protections on the planet.

Getting your Android app ready for Autofill

Posted by Wojtek Kalicinski, Android Developer Advocate, Akshay Kannan, Product Manager for Android Authentication, and Felipe Leme, Software Engineer on Android Frameworks

Starting in Oreo, Autofill makes it easy for users to provide credit cards, logins, addresses, and other information to apps. Forms in your apps can now be filled automatically, and your users no longer have to remember complicated passwords or type the same bits of information more than once.

Users can choose from multiple Autofill services (similar to keyboards today). By default, we include Autofill with Google, but users can also select any third-party Autofill app of their choice. Users can manage this from Settings > System > Languages & input > Advanced > Autofill service.

What's available today

Today, Autofill with Google supports filling credit cards, addresses, logins, names, and phone numbers. When logging in or creating an account for the first time, Autofill also allows users to save the new credentials to their account. If you use WebViews in your app, which many apps do for logins and other screens, your users can now also benefit from Autofill support, as long as they have Chrome 61 or later installed.

The Autofill API is open for any developer to implement a service. We are actively working with 1Password, Dashlane, Keeper, and LastPass to help them with their implementations and will be working with other password managers shortly. We are also creating a new curated collection on the Play Store, which the "Add service" button in Settings will link to. If you are a password manager developer and would like us to review your app, please get in touch.

What you need to do as a developer

As an app developer, there are a few simple things you can do to take advantage of this new functionality and make sure that it works in your apps:

Test your app and annotate your views if needed

In many cases, Autofill may work in your app without any effort. But to ensure consistent behavior, we recommend providing explicit hints to tell the framework about the contents of your field. You can do this using either the android:autofillHints attribute or the setAutofillHints() method.
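
For example, here is a minimal Kotlin sketch of annotating a login form at runtime. The layout and view IDs (R.layout.activity_login, R.id.username, R.id.password) are hypothetical placeholders, and the same hints could be declared in the layout XML via android:autofillHints instead:

    import android.app.Activity
    import android.os.Bundle
    import android.view.View
    import android.widget.EditText

    class LoginActivity : Activity() {
        override fun onCreate(savedInstanceState: Bundle?) {
            super.onCreate(savedInstanceState)
            // Hypothetical layout with two EditText fields.
            setContentView(R.layout.activity_login)

            // Equivalent to android:autofillHints="username" in the layout XML.
            findViewById<EditText>(R.id.username)
                .setAutofillHints(View.AUTOFILL_HINT_USERNAME)

            findViewById<EditText>(R.id.password)
                .setAutofillHints(View.AUTOFILL_HINT_PASSWORD)
        }
    }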

Similarly, with WebViews in your apps, you can use HTML Autocomplete Attributes to provide hints about fields. Autofill will work in WebViews as long as you have Chrome 61 or later installed on your device. Even if your app is using custom views, you can also define the metadata that allows autofill to work.
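
For instance, a login form served inside a WebView might carry hints like this – a minimal sketch of a hypothetical page using the standard autocomplete attribute values:

    <form method="post" action="/login">
      <!-- The autocomplete values give autofill the same kind of hints
           that android:autofillHints provides for native views. -->
      <input type="text" name="username" autocomplete="username">
      <input type="password" name="password" autocomplete="current-password">
      <button type="submit">Sign in</button>
    </form>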

For views where Autofill does not make sense, such as a Captcha or a message compose box, you can explicitly mark the view as IMPORTANT_FOR_AUTOFILL_NO (or IMPORTANT_FOR_AUTOFILL_NO_EXCLUDE_DESCENDANTS in the root of a view hierarchy). Use this field responsibly, and remember that users can always bypass this by long pressing an EditText and selecting "Autofill" in the overflow menu.
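
In Kotlin, that opt-out is a one-liner (captchaView and rootView are placeholder references; the android:importantForAutofill layout attribute works too):

    // Opt a single view out of autofill, e.g. a captcha field.
    captchaView.importantForAutofill = View.IMPORTANT_FOR_AUTOFILL_NO

    // Or exclude an entire subtree from its root view.
    rootView.importantForAutofill = View.IMPORTANT_FOR_AUTOFILL_NO_EXCLUDE_DESCENDANTS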

Affiliate your website and mobile app

Autofill with Google can seamlessly share logins across websites and mobile apps ‒ passwords saved through Chrome can also be provided to native apps. But for this to work, you as an app developer must explicitly declare the association between your website and your mobile app. This involves two steps:

Step 1: Host a JSON file at yourdomain.com/.well-known/assetlinks.json

If you've used technologies like App Links or Google Smart Lock before, you might have heard of the Digital Asset Links (DAL) file. It's a JSON file placed at a well-known location on your website that lets you make public, verifiable statements about other apps or websites.

You should follow the Smart Lock for Passwords guide for information about how to create and host the DAL file correctly on your server. Even though Smart Lock is a more advanced way of signing users into your app, our Autofill service uses the same infrastructure to verify app-website associations. What's more, because DAL files are public, third-party Autofill service developers can also use the association information to secure their implementations.
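
For orientation, a hosted assetlinks.json that shares login credentials with an app looks roughly like the sketch below. The package name and certificate fingerprint are placeholders, so follow the guide above for the exact statements your app needs:

    [{
      "relation": ["delegate_permission/common.get_login_creds"],
      "target": {
        "namespace": "android_app",
        "package_name": "com.example.yourapp",
        "sha256_cert_fingerprints": [
          "REPLACE:WITH:YOUR:SIGNING:CERT:SHA256:FINGERPRINT"
        ]
      }
    }]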

Step 2: Update your App's Manifest with the same information

Once again, follow the Smart Lock for Passwords guide to do this, under "Declare the association in the Android app."

You'll need to update your app's manifest file with an asset_statements resource, which links to the URL where your assetlinks.json file is hosted. Once that's done, you'll need to submit your updated app to the Play Store, and fill out the Affiliation Submission Form for the association to go live.
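
As a rough sketch (with a placeholder domain), the two pieces look like this:

    <!-- AndroidManifest.xml, inside the <application> element -->
    <meta-data
        android:name="asset_statements"
        android:resource="@string/asset_statements" />

    <!-- res/values/strings.xml -->
    <string name="asset_statements" translatable="false">
        [{ \"include\": \"https://yourdomain.com/.well-known/assetlinks.json\" }]
    </string>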

When using Android Studio 3.0, the App Links Assistant can generate all of this for you. When you open the DAL generator tool (Tools -> App Links Assistant -> Open Digital Asset Links File Generator), simply make sure you enable the new checkbox labeled "Support sharing credentials between the app and website".

Then, click on "Generate Digital Asset Links file", and copy the preview content to the DAL file hosted on your server and in your app. Please remember to verify that the selected domain names and certificates are correct.

Future work

It's still very early days for Autofill in Android. We are continuing to make some major investments going forward to improve the experience, whether you use Autofill with Google or a third party password manager.

Some of our key areas of investment include:

  1. Autofill with Google: We want to provide a great experience out of the box, so we include Autofill with Google with all Oreo devices. We're constantly improving our field detection and data quality, as well as expanding our support for saving more types of data.
  2. WebView support: We introduced initial support for filling WebViews in Chrome 61, and we'll be continuing to test, harden, and make improvements to this integration over time, so if your app uses WebViews you'll still be able to benefit from this functionality.
  3. Third party app support: We are working with the ecosystem to make sure that apps work as intended with the Autofill framework. We urge you as developers to give your app a spin on Android Oreo and make sure that things work as expected with Autofill enabled. For more info, see our full documentation on the Autofill Framework.

If you encounter any issues or have any suggestions for how we can make this better for you, please send us feedback.

With E-rate Funding, Magic Happens at Ascend Public Charter Schools

Four sites with four separate networks. No VPN capabilities, no streamlined network management, and no IT budget. Limited wireless access, inhibiting student learning and staff collaboration. Only a five-person IT team. Starting to sound like a nightmare? This was the reality for Ascend Public Charter Schools, located in Brooklyn. Emeka Ibekweh, Managing Director of […]

Go behind the scenes with Austin City Limits: Backstage

“Austin City Limits” needs little introduction. It’s the longest-running television music program in history, it’s helped launch the careers of iconic musicians like Willie Nelson (featured in the very first episode back in 1974), and it’s even enshrined in the Rock & Roll Hall of Fame. But for all its history, the closest you can get to the show is a spot in the crowd or a seat in front of your TV screen. We wanted to go further, and pay tribute to this legendary show’s 43rd season and its impact on pop culture. So we’re releasing a new virtual reality video series called “Austin City Limits: Backstage” in partnership with SubVRsive Media.


“ACL Backstage” lets you explore the untold stories of the crew, the city, the fans and, of course, the musicians who make Austin City Limits possible—all in virtual reality. Venture backstage at Austin’s legendary Moody Theater to hear stories from some of your favorite artists. Then, watch and listen up close as they take the stage and play their hits under the bright lights. After that, you can take a whirlwind tour through the city’s thriving local music scene, where you’ll hear up-and-coming stars who might make it big one day.


“ACL Backstage” will have 10 episodes, each featuring a different artist or group. The first three are available now, with more coming soon:

  • “Ed Sheeran”: This is Ed Sheeran’s second ACL Live performance, and since he last took the stage in 2014, his career has skyrocketed. Now, with multiple Grammy wins and three platinum records under his belt, he reflects on his rise to the top of the charts. His passion for the music and his fans shines through in this episode.

  • “Zac Brown Band”: Three-time Grammy Award-winning, multi-platinum artists Zac Brown Band make a stop on their 2017 Welcome Home Tour to grace the ACL stage for the very first time. Sit backstage with the band as they chat about ACL’s rich history, and join them onstage for their lively show.

  • “Unsung Heroes”: Hear ACL stories directly from crew members, many of whom have worked on the show for decades. They explain the ethos of Austin City Limits and why it remains so popular among musicians.

Use your Cardboard or Google Daydream View to check out all the videos on the ACL YouTube Channel. Kick back, hang with your favorite artists, and rock out.