Latest Tech Feeds to Keep You Updated…

Testing Subscriptions Support in Instant Articles

By Campbell Brown, Head of News Partnerships, Alex Hardiman, Head of News Product, and Sameera Salari, Product Manager

Over the next few weeks, we’re launching a test to support news subscription models in Instant Articles in partnership with a small group of publishers across the U.S. and Europe. This initial test will roll out on Android devices, and we hope to expand it soon.

This is a direct result of the work we’re doing through the Facebook Journalism Project. We’re listening to news publishers all over the world to better understand their needs and goals, and collaborating more closely on the development of new products from the beginning of the process.

Earlier this year, many publishers identified subscriptions as a top priority, so we worked with a diverse group of partners to design, refine, and develop a test suited for a variety of premium news models. We also heard from publishers that maintaining control over pricing, offers, subscriber relationships, and 100% of the revenue are critical to their businesses, and this test is designed to do that.

The following partners have been integral to building this product and we look forward to learning more together as the test gets underway: Bild, The Boston Globe, The Economist, Hearst (The Houston Chronicle and The San Francisco Chronicle), La Repubblica, Le Parisien, Spiegel, The Telegraph, tronc (The Baltimore Sun, The Los Angeles Times, and The San Diego Union-Tribune), and The Washington Post.


Here’s how the test will work:

  • We’ll support a paywall in Instant Articles for both metered models (we’ll start with a uniform meter at 10 articles and test variations from there) and freemium models (the publisher controls which articles are locked).
  • When someone who isn’t yet a subscriber to a publication encounters a paywall within Instant Articles, they will be prompted to subscribe for full access to that publisher’s content.
  • If that person subscribes, the transaction will take place on the publisher’s website. The publisher will process the payment directly and keep 100% of the revenue.
  • The publisher-subscriber relationship will work the same way it does on publishers’ own sites today: the publisher has direct access and full control, including setting pricing and owning subscriber data.
  • These subscriptions include full access to a publisher’s site and apps.
  • Similarly, someone who is already a subscriber to a publication in the test can authenticate that subscription within Instant Articles in order to get full access to that publisher’s articles.
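The access rules above boil down to a simple decision. As a minimal sketch (the function name, threshold constant, and `locked` flag are illustrative assumptions, not Facebook's implementation), the combined metered/freemium check might look like this:

```python
# A minimal sketch of the two access models described above.
# Names and thresholds are illustrative assumptions, not Facebook's implementation.

METER_LIMIT = 10  # the uniform meter the test starts with

def can_read(article: dict, articles_read_this_month: int, is_subscriber: bool) -> bool:
    """Return True if the reader gets the full article, False if they hit the paywall."""
    if is_subscriber:
        # Authenticated subscribers always get full access.
        return True
    if article.get("locked", False):
        # Freemium model: the publisher decides which articles are locked.
        return False
    # Metered model: free until the monthly meter runs out.
    return articles_read_this_month < METER_LIMIT

# A non-subscriber's 11th article of the month triggers the subscribe prompt.
print(can_read({"locked": False}, articles_read_this_month=10, is_subscriber=False))  # prints False
```

In this sketch, the subscriber check comes first so that authenticated subscribers bypass both the lock flag and the meter, matching the flow described in the bullets.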

Furthermore, we’ll be testing other units to help publishers drive additional subscriptions before a person might hit the paywall. These units include a Subscribe Call-to-Action Unit (CTA), which will appear in-line in Instant Articles similar to other CTAs like Email Sign-Up or App Install. We’ll also test a “Subscribe” button that will replace the “Like” button on the top right corner of an article. We’ll continue to collaborate with publishers to refine these units and build out new ones.

As with many products we build at Facebook, we’ll observe how people respond to this new experience, and we’ll be working with these partners to analyze, learn, and iterate over time. We hope to expand the test to additional partners in the future.

Looking Ahead

We’re continuing to invest in Instant Articles because data has shown that people prefer the faster-loading mobile reading experience, and that translates to more traffic and engagement for publishers.

For over a year, we’ve been developing tools in Instant Articles to help publishers build deeper relationships with their audiences, including Email Sign-Up and App Install CTAs. For many publishers, the goal is that this ultimately leads to a paid subscription, and we want to help facilitate that relationship within Instant Articles and on Facebook. Publishers not using Instant Articles already have the ability to implement paywalls and subscription models on the mobile web.

Concurrently, we’ll remain focused on driving continual improvements in ad performance in Instant Articles. This year alone, the average revenue per page view has increased over 50%, and Instant Articles pays out more than $1 million per day to publishers via Audience Network.

Over time, we’ll continue investing in new ways to enable publishers’ subscription businesses — including working with publishers to remove friction from the conversion flow to subscribe, leveraging data to better target content and offers to likely and existing subscribers, and improving our marketing tools to make them better suited for publishers’ needs. We’re looking forward to working with our partners to help support an important business model for the news industry.

Delivering better government services at lower costs with Chrome

Editor’s note: Today’s post comes from Vijay Badal, Director of Application Services of DOTComm. Founded in 2003, DOTComm provides centralized IT support and consulting for 70 government agencies in the city of Omaha and Douglas County, NE. DOTComm uses Chrome browser and G Suite to improve employee productivity and mobility and cut IT costs.

At DOTComm, our employees provide technical support for more than 5,000 government workers throughout Omaha and Douglas County. Because these workers are spread across 120 different locations, our employees need access to the tools required to do their jobs, whether they’re in the office or on site with our customers. Several years ago, we realized the legacy systems we were using were getting in the way.

When employees had to travel to provide technical support for the government agencies we serve, they didn’t have mobile access to important documents, or the ability to share and send files back to the office, such as videos that outlined technical issues. In addition, hardware and licensing were costly, and inflexible productivity applications were making it difficult for employees to collaborate or work from the road. Plus, we needed half a dozen employees just to maintain our infrastructure!

To solve these challenges, we turned to Chrome and G Suite. Chrome is fast, secure and gives our staff access to thousands of useful extensions. It’s also allowed us to standardize across our desktop and mobile devices. G Suite has helped us cut hardware costs and improve collaboration and mobility. With Chrome and G Suite, we no longer pay thousands of dollars in annual licensing fees, and we’ve reduced the number of people managing infrastructure from six to one, freeing up the other five people to work on different tasks.

Chrome’s extensions have been big productivity boosters. One extension syncs the staff’s Google calendars with their Salesforce calendars. Previously, employees had to check two separate apps and cross-reference two separate calendars. Now they only need to check one. Another extension gives staff mobile access to Google Docs and Google Sheets. This means they can work nearly anywhere. When they’re out of the office or in the field, they can create and share files on any device they need.

As an IT department, we’re particularly pleased with the security and other IT benefits we get with Google. Chrome has built-in malware and phishing protection, and we use the G Suite admin console to ensure all user downloads are stored on the same network drive so they can be checked for malware. The G Suite admin console lets us control Chrome settings for employees, including whitelisting extensions so employees can use them, pushing recommended extensions to users, and rolling out Chrome updates on a set schedule. That’s made our IT administrators’ lives much easier and has been a huge timesaver. And because we centrally manage the rollout of extensions for new employees, individual city and county departments no longer need a dedicated IT person working on new-hire application orientation. So we save time and money with each new hire.
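The controls described above correspond to Chrome’s enterprise policies, which the admin console manages for signed-in users. As a rough illustration only (the extension ID and network path are placeholders, and exact policy names can vary by Chrome version), a policy set whitelisting one extension, force-installing it, and redirecting downloads to a network share might look like:

```json
{
  "ExtensionInstallWhitelist": [
    "aaaabbbbccccddddeeeeffffgggghhhh"
  ],
  "ExtensionInstallForcelist": [
    "aaaabbbbccccddddeeeeffffgggghhhh;https://clients2.google.com/service/update2/crx"
  ],
  "DownloadDirectory": "\\\\fileserver\\team-downloads\\${user_name}"
}
```

Each `ExtensionInstallForcelist` entry pairs an extension ID with its update URL, which is what lets an administrator push a recommended extension to every managed browser.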

Meanwhile, the number of help tickets for IT support has plummeted, from 30 a day to one or two. For example, we no longer have to deal with local archive files, which means our staff spends less time troubleshooting and the government employees we serve don’t waste time wrestling with unfamiliar technology. Productivity has increased as well. For example, City Police, City Fire, and County Health departments all use shared Google Sheets within their individual precincts for shift change management. This allows them to roll over shift changes swiftly and efficiently, without missing any critical ongoing task assignments.

Chrome browser and G Suite have allowed us to offer more secure and productive IT services to all City of Omaha and Douglas County employees, who are then able to better serve citizens. DOTComm and the City of Omaha were recently honored as one of "Top 10 Cities" by the Center for Digital Government in its Digital Cities Survey 2016, which recognizes cities that use technology to improve citizen services, enhance transparency and encourage citizen engagement. This marked the first time the City of Omaha made the list—but I predict it won’t be the last now that we’re using Chrome browser and G Suite.

Google Home Mini has arrived—here’s what you can do with it

A few weeks ago we unveiled Google Home Mini, the newest addition to the Google Home family. About the size of a donut, it has all the smarts of the Google Assistant and gives you hands-free help in any room of your house. Starting today, you can grab it online from the Google Store or on the shelves of Best Buy, Walmart, Target and other stores.


Just start with “Hey Google” to get answers from your Google Assistant, tackle your day, enjoy music or TV shows, and control your compatible smart home devices. And with Voice Match, the Assistant can tell your voice from others—up to six people can get personal assistance on each device.

Here are six fun things you can do with your Mini:

  1. Find my phone: When you lose your phone in the couch cushions, your Assistant can find it for you. “Hey Google, find my phone” will ring your Android phone (even if it’s on silent) or your iPhone.
  2. Set a sleep timer: Fall asleep to the sweet sounds of your favorite music or podcast by saying, “Hey Google, set a sleep timer for 30 minutes.”
  3. Play news by voice on your TV: Stay on top of current events with YouTube news playlists from sources like ABC, Fox and NBC. With a Chromecast-connected TV, you can say: “Hey Google, play the news on my TV” or “Ok Google, play sports news on my TV.”
  4. Turn the TV on and off: With Google Home, Chromecast, and a compatible TV you can just say “Hey Google, turn off the TV.”
  5. Enable night mode: In night mode, Mini’s lights dim and the volume lowers so that you don’t disturb others in your household when it’s late (or early).
  6. Set a default TV or speaker: Choose a Chromecast-connected TV to be your default screen, so you don’t need to mention the device's name in your voice command. When you say “Play yoga videos,” they’ll play on the TV you’ve set as the default. It works the same way for speakers connected to Chromecast Audio—you can designate a group of speakers that cover several rooms (“first floor,” for example) as the default. Then say “Hey Google, play workout playlist” and it will automatically start playing on that group of speakers.

You can start using these features today with any Google Home or Google Home Mini—and stay tuned for lots more to come!

From paw prints to a digital footprint: a tailor shop attracts new customers

A chubby French Bulldog keeps watch in front of a vintage-looking tailor shop in New York City’s SoHo neighborhood. Meet Bruno, the face of Village Tailor and Cleaners. Vince, the shop’s owner, immigrated to the U.S. from Italy when he was just 18 years old, establishing Village Tailor in 1977. Today, his family-run business has grown into three locations and is best known for its skilled leather and suede alterations. Inside the shop, a wall covered in autographed photos of celebrity customers—Celine Dion, Marc Anthony, Elton John, and others—is a testament to the iconic quality of Vince's work.

Vince and Bruno outside the shop.

While Bruno had been doing a wonderful job bringing in passersby, Vince knew he needed a way to stand out from the many tailoring shops in SoHo and reach more customers.

Vince noticed that most of his customers were walking in with a bag of clothes in one hand, and researching local businesses on their cell phone with the other. So, he decided to get his business online. He saw it as similar to Bruno sitting out front: their online presence could spark curiosity, help them stand out, and invite in new customers.


Bruno is on the lookout for new customers ... and treats.

He set up Village Tailor's Google listing, so that he could edit how his business appears when people find it on Google Search and Maps. He added photos to his listing, posted updates about his skilled alterations, and used Google website builder to create a free high-quality website from his phone in less than 10 minutes. Now, when he asks new customers how they found his shop, they often mention Google.


Having an online presence not only helped Vince reach new customers, but also allowed him to build relationships with his existing customers by responding to reviews. Because people trust online reviews as much as personal recommendations, Vince sees reviews as an opportunity to adapt his business to customers’ needs. The results have been great for Village Tailor: within weeks of getting online, Vince noticed they were bringing in an average of five more customers per week. After three months, that number increased to 15 per week, representing a 30% annual revenue increase for Vince.

The store’s early success with Google My Business inspired Vince to try AdWords, advertising to potential customers searching on Google for keywords related to tailoring. Since customers raved about the leather and suede work in Village Tailor’s Google reviews, Vince focused on those services in his online ads, which brought in even more revenue. That meant he could hire more tailors and invest in new equipment to keep up with the long lines of customers. Now, while Bruno will always have a place in front of Village Tailor, Google brings in most of their customers. Sorry Bruno!


Father and son: two generations of excellence in alterations.

Today, Vince’s son Vincent Jr. manages Vince's Village Cobbler, the shoe repair shop next door. Building on the family business’s tradition of excellent craftsmanship in shoes and leather goods, he continues to develop Village Cobbler's online presence with an e-commerce website that offers shipping all over the U.S. He also plans to find new customers with Google My Business and Google AdWords, just like his father has, to keep the family business growing. 

Unlocking the New Reality


Samsung has a long history as a leading pioneer in developing innovative technology that delivers incredible experiences and immerses users in amazing new realities. By combining our expertise in hardware design and engineering with the incredible advancements of our software, we are driving the digital future and creating previously unimagined opportunities for our developers and consumers alike.


The way we experience the world around us has changed drastically over the course of just a few short years. The rise of mobile technology means we are no longer confined to our physical world. Augmented reality (AR) and virtual reality (VR) have added new layers of entertainment, information and education to our lives in ways that we have never experienced before.


And yet, it is still early days. At Samsung, we believe the democratization of this technology will foster a creative environment for developers to create amazing new applications and experiences. As these technologies move to the mainstream, it’s up to the industry leaders to adopt a collaborative and open approach to help AR and VR flourish.



Expanding our Worlds

Samsung introduced Gear VR nearly three years ago. Today, its rapidly expanding ecosystem features more than 1,000 apps, 10,000 high-quality videos through the premium 360-degree immersive Samsung VR service and noteworthy collaborations with top content creators and media brands. As a leader in virtual reality entertainment and devices, Samsung is already at the helm of building the technology for a next-level VR experience, changing how people use their smartphones and experience the world around them.


As the appetite for VR content continues to grow, so does Samsung’s spirit of innovation. At SDC 2017 we showcased our latest VR solutions to help educate developers on the possibilities that exist when building within Samsung’s VR ecosystem. Samsung has developed a range of leading VR services including:



  • Samsung VR, a premium video service offering the best in 360-degree immersive content, taking you anywhere from the cockpit of an airplane to the depths of the ocean or the far reaches of the universe with your Galaxy phone and a Gear VR.
  • Samsung Internet VR, a Gear VR-optimized browser that allows users to surf the web and enjoy videos and photos on a large virtual screen, creating an immersive experience as if they were at the theater.
  • Samsung PhoneCast VR, a first-of-its-kind application that translates 2D apps into 3D VR through mirroring for improved VR playtime, providing an entirely new mobile experience.
  • VRB Foto, a fun and social 360-degree photo sharing solution developed by the Mobile Platform and Solutions lab of Samsung Research America, which allows users to create and share fun effects on top of 360-degree photos shot with cameras like the Gear 360.
  • Black River Studios, a division of Samsung R&D Institute of Amazonia (SIDIA) in Brazil, which produces titles for Gear VR and Galaxy devices, including Angest, Rock & Rails, and Finding Monsters, all well suited to the upcoming HMD Odyssey.
  • Samsung Gear VR Framework, an open-source VR rendering engine with a Java interface that offers traditional Android developers a simple, familiar SDK for building mobile VR apps without having to learn multiple underlying VR SDKs.


Developers can pair these services with leading hardware like the Gear 360, as well as the recently released 360 Round, a new camera featuring 17 lenses able to capture a full horizontal 360-degree sphere for developing and streaming high-quality 3D content for VR. Additionally, the new Samsung Gear 360 SDK now allows developers to build experiences that control Gear 360 cameras directly from an app, while new software updates simplify how photos are shared with Samsung cloud integration. With these world-class hardware and software combinations, developers can create new digital environments that challenge our thinking about what’s possible while driving excitement and, ultimately, adoption of these new technologies.


As these technologies advance, we at Samsung believe that it’s important for the industry to provide greater access to this rapidly expanding market through an open approach to partnerships and close collaborations that drive valuable experiences for consumers. Our recent partnerships with top content creators like The New York Times give our customers access to professional VR content, while integrations with Microsoft have led to new innovations like the HMD Odyssey headset, considered the most immersive Windows mixed reality headset available. With dual 3.5-inch AMOLED displays, a 110-degree field of view, and 360-degree Spatial Sound, the Samsung HMD Odyssey promises a journey to a world of games, videos, and socializing that knows no limits.



Delivering the Promise of AR

Samsung is known for creating technology that breaks the boundaries of what’s currently possible. Augmented reality has introduced a new experience by bridging our digital and physical worlds. During the Samsung Developer Conference (SDC 2017), we shared our vision for the rapidly evolving AR landscape through our partnership with Google to extend the ARCore platform to more Galaxy devices (S8, S8+ and Note8) bringing the best AR experiences to our users. While this is driven by the strengths of our mobile devices, we believe the future of AR will extend beyond the smartphone as we apply advances in machine learning and computer vision to move from recording the world to understanding it.


Consider our Galaxy series, which has redefined standards for mobile cameras with dual-pixel sensors and advanced optics. By adding machine learning and computer vision to the equation, we have transformed the camera into a means of comprehending and contextualizing the world, quickly and accurately recognizing the people, places and things in front of the lens.


That’s where Bixby Vision comes in. We introduced Bixby Vision earlier this year, further expanding our AR capabilities. Bixby Vision offers a new way to understand the world, seamlessly integrated into your smartphone camera: it recognizes objects, scenes and text. Working from the camera app and photo gallery, Bixby Vision uses a hybrid deep learning system that runs both on the device and in the cloud, so it can better identify objects and provide new ways of understanding and engaging with the world around you.


As Samsung continues to build innovative applications and expand the AR capabilities of our devices, we envision a future in which digital experiences like AR break through to wide consumer adoption, providing users with a powerful new experience in their daily lives. Imagine the possibilities beyond gaming and entertainment. Municipalities can install digital signposts which overlay directions for visiting tourists. Businesses have the potential to train new employees on safety procedures for new machines and equipment with virtual guides. The opportunities are endless.



Growing the Ecosystem

While today we experience AR on our phones, at Samsung, we believe that tomorrow this will go beyond smartphones. To ensure such innovations continue to drive the growth of VR and add to the rapid advancements in AR, Samsung is committed to growing its community of developers and collaborators to expand the presence of new reality experiences in our daily lives. By doing so, we will not only open new business opportunities for our partners, but also provide greater access for all to a rapidly expanding market. Through this extensive ecosystem, we envision a future with a seamless transition between the world as we know it and the new worlds and experiences of the new digital reality.



* Bixby Vision is currently available on the Galaxy S8, S8+, and Note8.

Samsung Addresses IoT Data Security at the Chip Level with New Hardware/Software Turn-Key Solution


Samsung Electronics, a world leader in advanced semiconductor technology, today introduced its integrated Secure Element (SE) solution for Internet of Things (IoT) applications that offers a turn-key service for both hardware and software needs.


As the benefits of IoT devices grow, so does the importance of security throughout the network that spans from cloud, servers and hubs to the individual connected devices. With both hardware and software support for the security requirements in today’s IoT devices, Samsung’s SE solution will help chip manufacturers to easily employ reliable security features, and bring innovative products and services to market faster.


At the hardware level, Samsung’s SE will stop and reset itself the moment it detects abnormal activity, thus protecting the sensitive data stored within the security IC (integrated circuit). The SE adopts embedded flash (eFlash) for the first time in the industry at the 45-nanometer (nm) process node, which brings faster data processing and more flexible software modifications compared to traditional EEPROMs (electrically erasable programmable read-only memory).


Samsung’s dedicated software for the SE supports various tasks such as personal verification, security key storage, and encoding and decoding. This software also allows key and authentication information to be safely transferred between devices, servers and clouds.


“Securing personal information stored on electronic devices and in the cloud is a top priority,” said Ben K. Hur, Vice President of System LSI marketing at Samsung Electronics. “Samsung’s Secure Element solution is another demonstration of our advanced security technology that has been proven through mobile application processors (AP), smart card ICs and other semiconductor products. We believe that applications for Samsung’s SE solution will continue to diversify along with the vastly expanding IoT industry.”


The SE and developer board are on display at the Samsung Developer Conference on October 18 and 19, 2017, in San Francisco, USA.

Google Play’s Indie Games Contest is back in Europe. Enter now

Posted by Adriana Puchianu, Developer Marketing Google Play

Following last year's success, today we're announcing the second annual Google Play Indie Games Contest in Europe, expanding to more countries and bigger prizes. The contest rewards your passion, creativity and innovation, and provides support to help bring your game to more people.

Prizes for the finalists and winners

  • A trip to London to showcase your game at the Saatchi Gallery
  • Paid digital marketing campaigns worth up to 100,000 EUR
  • Influencer campaigns worth up to 50,000 EUR
  • Premium placements on Google Play
  • Promotion on Android and Google Play marketing channels
  • Tickets to Google I/O 2018 and other top industry events
  • Latest Google hardware
  • Special prizes for the best Unity games

How to enter the contest

If you're based in one of the 28 eligible countries, have 30 or fewer full-time employees, and published a new game on Google Play after 1 January 2017, you may be eligible to enter the contest. If you're planning on publishing a new game soon, you can also enter by submitting a private beta. Check out all the details in the terms and conditions. Submissions close on 31 December 2017.

Up to 20 finalists will showcase their games at an open event at the Saatchi Gallery in London on 13 February 2018. At the event, the top 10 will be selected by the event attendees and the Google Play team. The top 10 will then pitch to a jury of industry experts, from which the final winner and runners-up will be selected.

Come along to the final event

Anyone can register to attend the final showcase event at the Saatchi Gallery in London on 13 February 2018. Play some great indie games and have fun with indie developers, industry experts, and the Google Play team.

Enter now

Visit the contest site to find out more and enter the Indie Games Contest now.

How useful did you find this blogpost?

[Photo] SDC 2017 Keynote Speeches Outline Samsung’s New IoT Vision

This year’s Samsung Developer Conference brought together over 6,000 developers, innovators and Samsung partners from around the world for two jam-packed days of events that showcased how Samsung’s interconnected and intelligent services offer tools to build a more open and connected future.


Kicking off the full slate of presentations, discussions and hands-on labs held at San Francisco’s Moscone West convention center were keynote speeches from Samsung and tech industry leaders. The opening keynote event illuminated Samsung’s efforts to usher in the new phase of IoT that it’s calling “Intelligence of Things”, and showcased the exciting implications for developers and consumers.


Check out the highlights of the keynotes below.


SDC 2017 attendees take their seats before the start of the conference’s keynote event.


DJ Koh, Samsung Electronics’ President of Mobile Communications Business, opened the conference by discussing how Samsung’s leadership in hardware, IoT, artificial intelligence (AI) and augmented reality (AR) is bringing the company closer to realizing its vision of an innovative future created through “connected thinking,” where connected experiences are seamless and unified across devices.


During his speech, Mr. Koh announced that Samsung will unite its IoT services – including Samsung Connect, SmartThings and ARTIK – into a powerful, integrated platform called SmartThings, which will go beyond smartphones and ultimately unite the world’s largest ecosystem of mobile devices, appliances, TVs and IoT sensors.


Next, Injong Rhee, Samsung Electronics’ Chief Technology Officer and Executive Vice President of Software and Services, discussed some of the user-experience challenges that can arise with screen or touch-panel technologies, and how Samsung’s Bixby interface – which currently features over 10 million active users – alleviates them.


Mr. Rhee also demonstrated exciting ways that users will be able to interact with their connected services in the new Intelligence of Things era. To illustrate this point, he offered attendees a glimpse of Project Ambience – a prototype dongle that may be applied to a variety of electronics to make them Bixby-compatible.


Gilles BianRosa, Senior Vice President and Chief Product Officer for Samsung’s Visual Display Business, revealed that beginning in 2018, Samsung will offer Bixby-enabled TVs in the US and Korea. He followed up that announcement by discussing the exciting opportunities that the TVs’ Bixby integration and seamless connectivity will create, including intuitive voice controls and multi-device viewing experiences.


Yoon C. Lee, Samsung Electronics America’s Senior Vice President and Division Head of Content and Services, discussed how adding Bixby support and SmartThings integration to Samsung’s Family Hub refrigerators marks a big step – one that will offer developers “tremendous opportunities to develop new content, applications and experiences in areas like food, health, home management, entertainment, and more.”


Eui-Suk Chung, Samsung Electronics’ Executive Vice President and Head of the Service Intelligence Team, offered attendees a peek at Bixby 2.0 – an update of the intelligent assistant that’s even more ubiquitous, open and personal.


Dag Kittlaus, CEO and Co-Founder of Viv Labs, described the exciting benefits that Bixby 2.0 presents for developers. These include the abilities to build services without being limited by domains or interfaces, and make services compatible with any and all electronic devices. Mr. Kittlaus also announced that Bixby 2.0 is currently being introduced as a private beta SDK available to select partners, and is set to roll out in 2018.


Robert Parker, Samsung Electronics’ Chief Technology Officer of SmartThings, elaborated on how the platform’s support for broad integration allows developers to create seamless experiences that utilize the full Samsung device ecosystem, and highlighted how partners like ADT and NVIDIA are harnessing SmartThings to better serve their customers.


James Stansberry, the Samsung Strategy and Innovation Center’s (SSIC) Senior Vice President and General Manager of ARTIK IoT, outlined how Samsung’s ARTIK IoT platform will merge with SmartThings to become an ideal platform for developing enterprise-grade IoT products and services, and introduced new, secure system-on-modules (SoMs) that offer important security enhancements for IoT solutions.


Pranav Mistry, Samsung Electronics’ Senior Vice President of Research, shared some of Samsung’s exciting developments in AR, and discussed how the company incorporates advanced object recognition and spatial understanding software into its smartphones’ cameras to create new ways to interact with the world.


Google’s Vice President of Virtual and Augmented Reality, Clay Bavor, joined Mr. Mistry on stage to illustrate how Samsung and Google’s partnership on ARCore, an augmented reality SDK for Android, is opening the door for incredible AR innovations that will benefit developers and users. A preview of ARCore is currently supported by Samsung’s Galaxy S8, and support for the Galaxy S8+ and Note8 is coming soon.


Attendees examine the wide range of Samsung technologies that empower developers to design connected solutions that enrich daily life.

Crafting Double Exposures with Birgit Palma

Birgit Palma describes her creative process as, above all else, playful. So when we asked the Austrian-born and Barcelona-based artist, illustrator, and type designer to try out the new Logitech Craft Keyboard, she was game. We followed Birgit as she created a double exposure image with the new keyboard to see how it impacted her process.

Birgit started the double-exposure project as she always does – by finding two images that, on their surface, have nothing to do with one another – and imagining how she could invent a new visual story by merging them together in one composition. For this project she landed on the idea of blending the organic shapes of a portrait with the whimsical architecture of a Russian church.

After quickly masking and layering the selected images, Birgit began messing around in Adobe Photoshop CC. “My way of working includes a lot of playing around, I try out a million things just to see if I like the outcome. Craft’s input dial invites me to play – it’s easy to achieve different effects, opacities, brush sizes just by turning the Crown. It’s possible to test different effects in a faster way, it saves time and gives me a new layer of creative control.”

After settling on a rough composition, she worked on the details that helped transform the two images into one. “I then used the Crown to change the opacity of the top picture, the building, and to toggle through the Blend Modes I’d use for the Double Exposure.”

Birgit refined the piece with some gentle retouching, using the analog input dial to enlarge and shrink the brush size as she moved around the composition. And even though she was quite far along in the process, Birgit was still, as she would put it, “playing around.”

“I still wasn’t sure if the piece would be black & white or color, so I just used the Crown to play with saturation to see what feels better. As a last step, I use a levels layer to give it the final accents and to create that fluent double-exposure effect.”

Birgit found that Craft and its input dial enhanced her experience creating in Photoshop. “Next to helping achieve a great deal of concentration it requires playful interaction. It’s refreshing to work with an element which is more sensitive and allows you to use it in different ways by turning & tapping. I’m working nearly 100% digitally, but I like the possibility of gaining more creative control outside the screen by using new techniques.”

Logitech will be in booth 205 at Adobe MAX 2017, demoing the new CRAFT Advanced Keyboard. Conference attendees, as well as design enthusiasts at home, can enter to win one of 10 exclusive double-exposure prints from Palma and a CRAFT keyboard through Logitech’s Facebook and Twitter pages during the show, and each day in the booth. Adobe Creative Resident Jessica Bellamy will also create works of art from the Logitech booth.


Adobe note:

To provide tight integration with the Adobe Creative Cloud, the Logitech team turned to the rich set of SDK and API services offered by the Adobe Creative Cloud Platform.

Samsung Introduces New ARTIK™ Secure IoT Modules and Security Services to Deliver Comprehensive Device-to-Cloud Protection for IoT

Samsung Electronics today announced ARTIK™ secure “s” systems-on-modules and services for the ARTIK™ IoT Platform to strengthen edge security. The Samsung ARTIK™ IoT platform now delivers device-to-cloud security, enabling companies to build, develop and manage secure, interoperable, and intelligent IoT products and services for everything from smart homes to high-tech factories. In addition, Samsung announced that the ARTIK™ IoT platform will fully integrate with the SmartThings Cloud, Samsung’s new unified IoT platform. This will enable interoperability with both Samsung and third-party IoT devices and IoT cloud services.



The Samsung ARTIK™ IoT Platform with SmartThings Cloud will provide everything companies need to quickly develop secure IoT products and services including production-ready hardware, software and tools, cloud services and a growing partner ecosystem. The addition of new ARTIK™ secure IoT modules enables device-level protection for safe data exchange, interoperability, and secure access to ARTIK™ IoT services including device onboarding, orchestration, management, and over-the-air updates.


“Security in the age of IoT means new levels of complexity and risk. The next generation of IoT products and services will be more deeply integrated into our lives than ever before,” said James Stansberry, Senior Vice President and General Manager of ARTIK™ IoT, Samsung Electronics. “Unfortunately, most companies are not prepared to address the challenges of securing every link of the chain, from device to IoT cloud. With the ARTIK™ IoT Platform and our new security hardened system-on-modules, we make it easier and more affordable for companies to adopt best security practices and deliver trustworthy products that will shape the future of IoT.”



New ARTIK™ Secure Systems-on-Modules

The new ARTIK™ secure IoT modules combine hardware-backed security with pre-integrated memory, processing, and connectivity for a broad range of IoT applications, from simple edge nodes like sensors and controllers, to home appliances, healthcare monitors, and gateways for smart factories. This helps protect data and prevent devices from being taken over, disabled, or used maliciously.


“The New ARTIK™ secure systems-on-modules provide the performance and security features we need for our new Samsung IoT appliances,” said Youngsoo Do, Senior Vice President of Digital Control Group at Samsung Electronics. “Together, Samsung ARTIK™ platform and SmartThings Cloud will create opportunities to help us to get to market faster and safely deliver new services that will enrich the lives of our customers.”


ARTIK™ secure IoT modules provide a strong root of trust from device-to-cloud with a factory-injected unique ID and keys stored in tamper-resistant hardware. Samsung’s public key infrastructure (PKI) enables mutual authentication to the cloud to identify each device on the network and support whitelisting. Customers can use the new Secure Boot feature and code signing portal to validate software authenticity on start-up. In addition, the secure IoT modules provide a hardware-protected Trusted Execution Environment (TEE) with a secure operating system and security library to process, store, and manage sensitive resources, including keys and tokens on devices. Information is protected using FIPS 140-2 data encryption and secure data storage.
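The Secure Boot idea described above, refusing to start software whose signature does not validate, can be sketched in a few lines. This is an illustrative stand-in only: the real ARTIK Secure Boot verifies public-key signatures against keys held in tamper-resistant hardware, whereas this sketch uses a stdlib HMAC with an invented key and invented firmware bytes.

```python
import hashlib
import hmac

# Hypothetical stand-in for a factory-injected, hardware-protected key.
DEVICE_KEY = b"factory-injected-example-key"

def sign_firmware(image: bytes) -> bytes:
    """What a code-signing step would produce for an image (HMAC here,
    a public-key signature in the real system)."""
    return hmac.new(DEVICE_KEY, image, hashlib.sha256).digest()

def secure_boot(image: bytes, signature: bytes) -> bool:
    """Boot-time check: only start software whose signature validates."""
    return hmac.compare_digest(sign_firmware(image), signature)

fw = b"example firmware image"
sig = sign_firmware(fw)
assert secure_boot(fw, sig)             # authentic image is allowed to boot
assert not secure_boot(fw + b"!", sig)  # tampered image is rejected
```

The key design point the sketch preserves is that the trust anchor (here `DEVICE_KEY`) never leaves the device, so an attacker who modifies the firmware cannot forge a valid signature.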


“Hardening the security of IoT devices once the product has been deployed is extremely difficult,” said Vikrant Gandhi, Digital Transformation Industry Director at Frost & Sullivan. “Manufacturers must use components that have built-in defenses for both device and data integrity. The ARTIK™ IoT platform’s new security measures provide this protection from the start and help companies operate more safely in the Internet of Things.”


ARTIK™ “s” secure IoT modules will be available on November 30 through Samsung ARTIK™ channel partners. The ARTIK™ IoT Platform and ARTIK™ cloud services are available today.



Ecosystem Expansion

Samsung’s new SmartThings Cloud will unite all of the company’s existing IoT cloud services – Samsung Connect Cloud and ARTIK Cloud – into one consolidated IoT cloud platform. This seamless, open ecosystem will allow better and easier connectivity across an expanded set of devices and services from home to industrial applications.


In addition to today’s announcements, Samsung continues to expand the ARTIK™ IoT Platform and accelerate businesses’ path to profitability. This expansion includes the recent launch of a new ARTIK™ service to monetize data from interoperable devices and enable an IoT data economy, as well as the addition of Ubuntu, the industry’s most familiar Linux distribution, to ARTIK™ high-performance system-on-modules. Earlier this month, Samsung – as a member of the Information Technology Industry Council (ITI) – also announced a commitment to enable a National IoT Strategy Dialogue that is open, collaborative and secure.



For more information, visit the Samsung ARTIK™ platform and Samsung ARTIK™ marketplace websites.

[Editorial] Bixby 2.0: The Start of the Next Paradigm Shift in Devices


Today at the Samsung Developer Conference (SDC) 2017 in San Francisco, I was honored to share Samsung’s vision for intelligence, with the introduction of Bixby 2.0 – a powerful intelligent assistant platform that will bring a connected experience that is ubiquitous, personal, and open.  Bixby 2.0 will be a fundamental leap forward for digital assistants and represents another important milestone to transform our digital lives.


Today’s assistants are useful, but ultimately still play a limited role in people’s lives. People use them to set timers and reminders, answer trivial questions, and so on. We see a world where digital assistants play a bigger, more intelligent role – where one day everything from our phones, to our fridge, to our sprinkler system will have some sort of intelligence to help us seamlessly interact with all the technology we use each day.


To understand where we are going with devices, it is important to understand how far we’ve come. It is hard to believe, when you think back to just a decade ago, how we used our phones. Yours was likely a feature phone, used primarily to make calls and run a limited number of apps. But I knew, we at Samsung knew, that there were greater possibilities for the mobile phone. During the early and mid-2000s, I led the team at Samsung that created the Mobile Intelligent Terminal by Samsung (MITs), an early generation of smartphone that preceded the Galaxy S and Note series. Powered by open APIs, an application ecosystem and an innovative touch UI, the smartphone market has since exploded, and the smartphone has become a life-essential tool for everyone. It has also brought new opportunities for businesses and developers.


I believe that we are now on the cusp of the next major tectonic shift. With our long legacy as the global smartphone leader, we’ve been bringing meaningful innovation and driving digital transformation for the industry, and we are excited to help lead the change during another revolutionary moment. Personally, I am excited to be part of this next great era and this shift is one of the reasons why we are motivated to continue to develop our intelligent assistant, Bixby.


We introduced Bixby to the world earlier this year with the launch of our flagship mobile devices, the Galaxy S8 and S8+ and the Note8. Bixby is now available in over 200 countries, with more than 10 million registered users. But this is just a start. When we launched Bixby, we focused on how it could help people get the most out of their smartphones and apps to make their lives easier. We integrated a few close partner applications on the device to make those devices more intelligent. We created multi-step and cross-app capabilities that have allowed millions of Bixby users to get things done faster and more easily, but this is just a stepping stone for us.


Now, we are ready to take Bixby to the next level. Bixby 2.0 is a bold reinvention of the platform – one aimed at transforming digital assistants from a novelty into an intelligent tool that is a key part of everyone’s daily life.


Bixby 2.0 will be ubiquitous, available on any and all devices. This means having the intelligence of Bixby, powered by the cloud, act as the control hub of your device ecosystem, including mobile phones, TVs, refrigerators, home speakers, or any other connected technology you can imagine. Soon, developers will be able to put their services on any and all devices and will not have to reinvent their services each time they support a new device.


It will be more personal, with enhanced natural language capabilities for more natural commands and complex processing, so it can really get to know and understand not only who you are, but who members of your family are, and tailor its response and actions appropriately.


And finally, and most importantly, Bixby 2.0 will be open. We know Samsung cannot deliver on this paradigm shift by ourselves; it can only happen if we all, across all industries, work together in partnership. With Bixby 2.0, the doors will be wide open for developers to choose and model how users interact with Bixby in their services across all application domains (e.g., sports, food, entertainment, or travel). The opportunities are truly endless.


Starting today, we’re announcing our first private beta program for the Bixby SDK, which will be available to select developers. We will work as one team – innovating, collaborating, and bringing Bixby 2.0 to life. Over time, we will increase the number of participants in the beta, and ultimately make the Bixby SDK available to all developers.


Bixby 2.0 will ultimately be a marketplace for intelligence: a new channel for developers to reach users with their services, not just on mobile devices but through all devices. Over time, we will roll out a variety of revenue models to maximize our partners’ business opportunities in this new paradigm. We hope to make it as fruitful for our partners as the move from feature phones to smartphones was.


The future is yours. Whatever your technology platform, device category or industry sector: we call on all developers to unleash their creativity and help us to democratize intelligence, so that we can move beyond devices and into an open, connected ecosystem to simplify the lives of everyone.

Samsung’s SmartThings Cloud: Bringing the IoT Dream to Life

There was a time, not so long ago, when the idea of a small device as the absolute center of all your communications was a fantasy. These days, most of us cannot remember what life was like before smartphones. When it comes to the Internet of Things (IoT), most people still see the idea of a seamlessly connected “smart” home as something well into the future; something we see in movies but beyond our reach. Despite compelling industry statistics (according to the technology experts at Gartner, in a little more than two years there will be 20.4 billion connected devices in the world), the reality of IoT for many is one rife with complications and unanswered questions.


The visual of IoT is a compelling one: With a simple command or touch of a button, you can control the lightbulbs in your living room, lock your back door and set the alarm for the garage door. And while developers and manufacturers all around the world are working hard to create their own smart IoT devices and solutions, few are compatible with each other. What this means for consumers who are keen to make their homes “smart” is a lot of work: how many different apps and what level of tech expertise will they need in order to have their devices work in sync? There are also concerns around privacy and security – surely all this technology means more risk of exposure. While there’s great appeal in the fantasy of the “smart” home, the barrier is all the work it will take to make this a reality.



Samsung believes in a future where our digital world is as ubiquitous and essential as electricity, with security so effective you almost won’t have to think about it. It will just work, on demand and just the way you want it; no need to interact with many different apps to achieve results. With our unified approach, we will deliver a truly integrated experience that touches every single “thing” around our customers – while ensuring privacy and security.


For over 40 years, the user experience is what has driven Samsung to develop technologies that, to put it simply, make people’s lives easier. While the features and products are important, the inspiration behind our IoT offering is the same: the tangible benefits for consumers. With Samsung Connect bringing together our appliances, TVs and other everyday items (such as lightbulbs and locks) under one-touch control on your smartphone, ARTIK as our secure IoT platform, and of course SmartThings’ open and easy integration solution, we are closer than ever to transforming the “smart” home fantasy into a reality. What’s more, with the introduction of new cellular network infrastructure that can better support IoT, we are looking to expand the IoT experience beyond the boundaries of the smart home. Most importantly, we are removing the barriers and bringing them all together in one integrated IoT experience.



Today at the Samsung Developer Conference (SDC 2017), we are proud to announce SmartThings Cloud, which will unite all of our IoT clouds into a single powerful platform. Through this new platform and vision for more powerful intelligence through Bixby 2.0, also announced today at SDC 2017, we are working to democratize IoT with intelligence that is easy to use and accessible for everyone.


The newly united SmartThings Cloud will be an open ecosystem, ready to work not only with Samsung devices, but with a wide range of connected devices. From early next year, consumers will have the freedom to choose from the world’s broadest range of IoT devices and control them using just a single app. Whether it’s price, design or functionality driving their purchase decision, consumers can now select devices that suit their budget and lifestyle with no constraints. This spirit of simplicity also extends to developers, who need to code against just one API to give their products and services instant compatibility across over 1 billion devices. Additionally, it’s a cloud designed for intelligence from the ground up, not as a bolt-on. The end result is a seamless experience that will enhance the lives of our consumers in a way they’ve only dreamed of.


As part of the world’s largest open ecosystem of IoT devices, with products designated ‘Works with SmartThings’ or even ‘Works as SmartThings Hub,’ our partners are integral to delivering this seamless experience to consumers. Many are already harnessing the power and simplicity of the SmartThings Cloud to bring next-generation solutions to their customers. We want to help them drive change at scale, and they have a unique opportunity to grow their IoT business.



With the new and – most importantly – open SmartThings Cloud, Samsung is connecting the fabric of the IoT experience and leading the democratization of the Internet of Things. Imagine turning on your favorite playlist, adding a few items to your shopping list and dimming the living room lights with simple voice commands, all in quick succession. Imagine devices from different manufacturers and service providers no longer jostling for supremacy; they all just work together. This is what meaningful change in technology ultimately delivers: a better, richer experience that exceeds consumers’ expectations and simply delivers more. For more information on SmartThings Cloud, please visit our SmartThings Developer Website.

Experience Samsung 360 Round, a High-Quality Camera for Creating and Livestreaming 3D Content for Virtual Reality (VR)


Samsung Electronics introduces the 360 Round, a new camera for developing and streaming high-quality 3D content for specialists and enthusiasts who demand a superior virtual reality (VR) experience. Announced at the Samsung Developer Conference (SDC 2017), the Samsung 360 Round uses 17 lenses—eight stereo pairs positioned horizontally and one single lens positioned vertically—to livestream 4K 3D video and spatial audio, and create engaging 3D images with depth.


The 360 Round’s durable, compact design features IP65¹ water and dust resistance for use in everyday weather conditions and a fanless design to reduce weight and eliminate background noise. With additional features, including PC software for control and stitching and expandable external storage², the 360 Round provides long-lasting shooting for any size job.


The growth of 360 content platforms, such as Samsung VR, Facebook and YouTube, as well as the spread of 360 videos through major media, has increased the need for high-quality 360 videos among VR professionals and enthusiasts. The 360 Round is the first product to meet these needs by combining high-quality, 360-degree imagery with advanced 3D depth at a reasonable price compared to other professional 360 cameras.


“The Samsung 360 Round is a testament to our leadership in the VR market. We have developed a product that contains innovative VR features, allowing video producers and broadcast professionals to easily produce high quality 3D content,” said Suk-Jea Hahn, Executive Vice President of Samsung Electronics’ Global Mobile B2B Team. “The combination of livestreaming capabilities, IP65 water and dust resistance and 17 lenses makes this camera ideal for a broad range of use cases our customers want—from livestreaming major events to filming at training facilities across various industries.”


The Samsung 360 Round combines high-quality images with a durable design and a content management software solution that allows VR directors to transform virtual reality through a complete set of advanced features.



High-Quality 360 Content

The 360 Round offers high-quality 3D images with a 4K camera, thanks to 17 lenses that capture a 360-degree view for a full 3D experience. In addition, the 360 Round enables livestreaming with little-to-no latency³ and broadcasting that is easier than ever, with one-step stitching and control software provided by Samsung.




The Samsung 360 Round uses a uni-body chassis designed to reduce heat, removing the need for a cooling fan and minimizing size and weight. The compact design helps eliminate excess noise and reduce power consumption for hours of continuous shooting. Additionally, the 360 Round is IP65¹ dust and water resistant, making it an ideal choice for capturing content in the most challenging environments. With expandable connectors and ports, the 360 Round is designed to easily and quickly connect to additional equipment, such as an external mic, and storage for saving large files.


The Samsung 360 Round will be available in October in the United States before expanding to other markets over time.



360 Round Product Specification

Camera: 17 cameras, each with a 1/2.8’’, 2M image sensor and an F1.8 lens
Audio: 6 internal microphones for spatial audio; 2 external microphone ports supported
Video resolution:
– Livestreaming (3D): 4096 x 2048 at 30fps per eye
– Livestreaming (2D): 4096 x 2048 at 30fps
– Recording (3D): 4096 x 2048 at 30fps per eye
– Recording (2D): 4096 x 2048 at 30fps
Format: MP4 (H.265/H.264); 3D: 4K x 2K per eye, 2D: 4K x 2K
Memory: Internal: LPDDR3 10GB, eMMC 40GB; External: UHS-II SD card (up to 256GB), SSD (up to 2TB)
Connectivity: LAN, USB Type-C
Sensors: Gyrometer and accelerometer
Power: 19V / 2.1A input (with AC adaptor)
PC software: 2 applications (camera control / streaming, content viewing)
PC requirements for post-processing:
– Windows 10, 64-bit OS for 4K video editing
– 16GB DDR4 RAM x 2 or more
– 850W power supply
– Intel Core i7-6700K or above
– NVIDIA GTX 1080 x 1
PC requirements for preview and live broadcast (as above, except):
– Intel Core i7-6950X or above
– 32GB DDR4 RAM x 2 or more
– NVIDIA GTX 1080 Ti x 2
Dimensions: 205 x 205 x 76.8mm, 1.93kg
Features: IP65¹ dust and water resistance


¹ When using this feature, please put the AC adaptor in a waterproof pack.

² Sold separately.

³ Depending on network connection.

*All functionality, features, specifications and other product information provided in this document including, but not limited to, the benefits, design, pricing, components, performance, availability, and capabilities of the product are subject to change without notice or obligation.

Samsung Shares Vision for an Open and Connected IoT Experience at Samsung Developer Conference 2017

Samsung Electronics today shared its vision for a connected world built on a widely accessible and open Internet of Things (IoT) platform. At the Samsung Developer Conference (SDC) 2017, held at Moscone West, Samsung also announced that it will unite its IoT services under SmartThings, introduced Bixby 2.0 with an SDK, and showcased its leadership in augmented reality (AR), ushering in an era of connected, seamless experiences that span devices, software and services.


“At Samsung, we’re constantly innovating in order to deliver smarter, connected experiences for our consumers. Now we’re taking a big step forward with our open IoT platform, intelligent ecosystem and AR capabilities,” said DJ Koh, President of Mobile Communications Business, Samsung Electronics. “Through an extensive open collaboration with our business partners and developers, we are unlocking a gateway to an expanded ecosystem of interconnected and intelligent services that will simplify and enrich everyday life for our consumers.”


Samsung also demonstrated Project Ambience, a small dongle or chip that can be applied to a wide variety of objects, allowing them to connect seamlessly and create a ubiquitous Bixby experience. This new concept reflects the next generation of IoT, that of the “Intelligence of Things,” which combines IoT and intelligence to make your life easier.



Democratizing the Internet of Things

Samsung is combining its existing IoT services—SmartThings, Samsung Connect, and ARTIK—into one united IoT platform: SmartThings Cloud, which will provide a single, powerful cloud-based hub that can seamlessly connect and control IoT-enabled products and services from a unified touchpoint. SmartThings Cloud will build one of the world’s largest IoT ecosystems, and will provide the infrastructure for a connected consumer experience that is innovative, versatile and holistic.


With SmartThings Cloud, developers will have access to one cloud API across all SmartThings-compatible products to build their connected solutions and bring them to more people. It will also provide secure interoperability and services for businesses developing commercial and industrial IoT solutions.
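The practical appeal of a single cloud API is that one request shape addresses any compatible device. The sketch below illustrates that idea only; the endpoint URL, token, and device ID are invented placeholders, not the real SmartThings API.

```python
import json
import urllib.request

def build_command(device_id: str, capability: str, command: str, token: str):
    """Assemble one generic 'send a command to a device' request.
    Everything here (URL, payload shape) is a hypothetical example."""
    body = json.dumps({"capability": capability, "command": command}).encode()
    return urllib.request.Request(
        f"https://cloud.example.com/devices/{device_id}/commands",  # placeholder
        data=body,
        headers={
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

# The same helper would address a lamp, a lock, a TV, or a sensor unchanged.
req = build_command("lamp-123", "switch", "on", "example-token")
```

The design point is that developers code against one request shape rather than one integration per device family.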



Next-Generation Intelligence

Samsung is moving intelligence beyond devices and into a ubiquitous, personal and open ecosystem with the introduction of Bixby 2.0 and its SDK, integrated with Viv technologies.


Bixby 2.0 will be available on a variety of devices, including Samsung Smart TVs and Family Hub refrigerators. Now, Bixby will sit at the center of consumers’ intelligent ecosystem. Bixby 2.0 will introduce deep-linking capabilities and enhanced natural-language abilities to better recognize individual users and create a predictive, personalized experience that anticipates their needs.


In order to build this faster, easier and more powerful intelligent assistant platform, Samsung will provide the tools to bring Bixby 2.0 into a wider range of applications and services. The Bixby SDK will be available to select developers through a private beta program, with general availability coming in the near future.



Leadership in Augmented Reality

Building on its heritage of creating pioneering technology that delivers incredible experiences and unlocks new realities such as VR, Samsung is committed to continuing to advance AR technologies. Through a partnership with Google, developers will be able to use the ARCore SDK to bring AR to millions of Samsung consumers on the Samsung Galaxy S8, Galaxy S8+ and Galaxy Note8. This strategic partnership offers new business opportunities for developers, and a new platform for creating immersive experiences for consumers.


For more information about the Samsung Developer Conference, follow @samsung_dev on Twitter.

HP: Reinventing The Magic of Printing

User experience (UX) design is about much more than just creating an interface that is beautiful to look at. The best UX design allows users to intuitively grasp how something works. Every choice, from color to size to placement, can change a user’s experience from frustration into delight and help the customer get the most from their technology.

For the past ten years, J.D. Knight has worked to create user-friendly interfaces for HP’s industry-leading printers and accessories. Ten years ago, user interfaces meant the small electronic screens and controls on the printers themselves. J.D. introduced print-screen animation, animated product demos, and other interface animations that not only looked good, but helped users quickly grasp how their HP products worked.

Today, the user interface has jumped from printers to smartphones. J.D. now works as part of the Global Experience Design team to create mobile experiences that delight users. One of the projects he recently worked on was the HP Smart App, which enables customers to use their smartphone to share or print documents and images from email, text messages, social media, or cloud storage services.

The Global Experience Design team uses Adobe XD CC to collaborate on the app prototypes for Apple, Windows, and Android devices. After working with many other prototyping tools, J.D. was happy to move to Adobe XD and consolidate all app prototyping in a single tool. “Adobe XD CC was looking like a one-stop shop for app prototyping and I wanted to be a part of the experience,” he says.

For J.D., the attraction to an app wasn’t just its mobility. People enjoy apps that personalize experiences and help them make the app truly their own. The HP Smart App uses a tile layout featuring colored tiles representing different functions and image tiles representing content types. J.D. used Adobe Stock to find images to fit the different content types and inspire users to make the app their own.

Users can rearrange and personalize the layout at will. They can select a favorite image from their camera roll and use it as the tile for specific types of content. If a user frequently prints photos from Facebook, they can place a tile representing Facebook front and center.

J.D. and the Global Experience Design team at HP are happy to continue providing input to the Adobe XD team so they can add features and functionality that make a difference for UX designers.

Read more about how HP is using Adobe XD.

A focus on portrait mode: behind the scenes with Pixel 2’s camera features

This week the Pixel 2 and Pixel 2 XL, Google’s newest smartphones, arrive in stores. Both devices come with features like Now Playing, the Google Assistant, and the best-rated smartphone camera ever, according to DXO.

We designed Pixel 2’s camera by asking how we can make the camera in your Pixel 2 act like SLRs and other big cameras, and we’ve talked before about the tech we use to do that (such as HDR+). Today we’re highlighting a new feature for Pixel 2’s camera: portrait mode.

With portrait mode, you can take pictures of your friends and family (that includes your pets too!) that keep what’s important sharp and in focus, but softly blur out the background. This helps draw your attention to the people in your pictures and keeps you from getting distracted by what’s in the background. This works on both Pixel 2 and Pixel 2 XL, on both the rear- and front-facing cameras.

Pictures without (left) and with (right) portrait mode. Photo by Matt Jones

Technically, blurring out the background like this is an effect called “shallow depth of field.” The big lenses on SLRs can be configured to do this by changing their aperture, but smartphone cameras have fixed, small apertures that produce images where everything is more or less sharp. To create this effect with a smartphone camera, we need to know which parts of the image are far away in order to blur them artificially.

Normally, to determine what’s far away with a smartphone camera you’d need to use two cameras close to each other, then triangulate the depth of various parts of the scene—just like your eyes work. But on Pixel 2 we’re able to combine computational photography and machine learning to do the same with just one camera.
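The triangulation idea above follows the classic pinhole-stereo relation: depth is proportional to focal length times baseline, divided by the pixel shift (disparity) between the two views. A minimal sketch, with illustrative numbers of our own choosing (these are not Pixel 2 specifications):

```python
# Depth-from-disparity triangulation, the principle behind two-camera
# depth sensing. Dual Pixel sensing works the same way, just with a
# tiny baseline between the left and right halves of one lens.

def depth_from_disparity(disparity_px, focal_length_px, baseline_m):
    """Classic pinhole-stereo relation: depth = f * b / disparity."""
    if disparity_px <= 0:
        return float("inf")  # no measurable shift => effectively at infinity
    return focal_length_px * baseline_m / disparity_px

# A nearby subject shifts more between the two views than a distant one.
near = depth_from_disparity(disparity_px=40.0, focal_length_px=2000.0, baseline_m=0.01)
far = depth_from_disparity(disparity_px=4.0, focal_length_px=2000.0, baseline_m=0.01)
print(near, far)  # 0.5 5.0  (metres)
```

Note how a 10x smaller disparity means a 10x larger depth, which is why precise sub-pixel matching matters for small baselines.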

How portrait mode works on the Pixel 2

Portrait mode starts with an HDR+ picture where everything is sharp and high-quality.

Next, our technology needs to decide which pixels belong to the foreground of the image (a person, or your dog) and which belong to the background. This is called a “segmentation mask” and it’s where machine learning comes in. We trained a neural network to look at a picture and understand which pixels are people and which aren’t. Because photos of people may also include things like hats, sunglasses, and ice cream cones, we trained our network on close to a million pictures—including pictures with things like those!
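As a toy illustration of what a segmentation mask is (this is not Google's network, whose internals aren't described here), the per-pixel "person" probabilities such a network emits can be thresholded into the binary foreground/background mask:

```python
# Toy illustration only: a real neural network produces the per-pixel
# probabilities; here we show just the thresholding step that turns
# them into a binary segmentation mask.

def to_segmentation_mask(person_probs, threshold=0.5):
    """True where a pixel is judged to belong to a person (foreground)."""
    return [[p >= threshold for p in row] for row in person_probs]

# Hypothetical 2x3 probability map from a person-segmentation network.
probs = [[0.94, 0.88, 0.10],
         [0.71, 0.30, 0.05]]
mask = to_segmentation_mask(probs)
print(mask)  # [[True, True, False], [True, False, False]]
```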

Just creating two layers—foreground and background, with a hard edge in between them—isn’t quite enough for all pictures you’d want to take; SLRs produce blur that gets stronger with each fraction of an inch further from the thing that’s in sharp focus. To recreate that look with Pixel 2’s rear camera, we use the new Dual Pixel sensor to look through the left and right sides of the camera’s tiny lens at the same time—effectively giving us two cameras for the price of one. Using these two views we compute a depth map: the distance from the camera to each point in the scene. Then we blur the image based on the combination of the depth map and the segmentation mask.

The result? Portrait mode.

Pictures without (left) and with (right) portrait mode. Photo by Sam Kweskin

Portrait mode works a little differently on the front-facing camera, where we aren’t able to produce a depth map the same way we do with the more powerful rear-facing camera. For selfies, we just use our segmentation mask, which works particularly well for selfies since they have simpler compositions.

Selfie without (left) and with (right) portrait mode. The front-facing camera identifies which background pixels to blur using only machine learning—no depth map. Photo by Marc Levoy

When and how to use portrait mode

Portrait mode on the Pixel 2 is automatic and easy to use—just choose it from your camera menu, then take your picture. You can use it for pictures of your friends, family, and even pets. You can also use it for all kinds of “close-up” shots of objects such as flowers, food, or bumblebees (just don’t get stung!) with background blur.

Close-up picture without (left) and with (right) portrait mode. Photo by Marc Levoy

Here are some tips for how to take great portraits using any camera (and Pixel 2 as well!):

  • Stand close enough to your subjects that their head (or head and shoulders) fills the frame.
  • For a group shot where you want everyone sharp, place them at the same distance from the camera.
  • Put some distance between your subjects and the background.
  • For close-up shots, tap to focus to get more control over what’s sharp and what’s blurred. Also, the camera can’t focus on things closer than several inches, so stay at least that far away.
To learn more about portrait mode on the Pixel 2, watch this video by Nat & Friends, or geek out with our in-depth, technical post over on the Research blog.

How Google Built the Pixel 2 Camera

Adobe Enhances Partner Ecosystem at MAX

Consumer expectations are at an all-time high and brands must create and deliver amazing experiences at every turn to succeed. Each company has a unique set of business challenges it must address on the path to becoming an experience-led business. Tailored, innovative technology is required to make this a reality.

Through our partner program, we’re helping agencies, systems integrators, independent software vendors (ISV), and technology companies empower their customers to lead with experience and grow their business through a deeper relationship with Adobe. Our partner ecosystem includes more than 4,700 agencies, systems integrators, ISVs, and technology partners worldwide.

Today at the Adobe MAX conference, we’re excited to announce new advancements across our global partner ecosystem.

Tata Consultancy Services Standardizing on Adobe XD
Today, we launched Adobe XD, an all-in-one design solution that enables experience design teams to design, prototype and share websites and mobile applications at scale. I’m excited to announce our first partner and systems integrator to standardize on the app: Tata Consultancy Services (TCS), a leading global IT services, consulting and business solutions organization. TCS will leverage Adobe XD as its main creative solution for UX and UI design internally, as well as with its clients, starting in its Digital Interactive practice.

Courtesy of Tata Consultancy Services

TCS advises the world’s largest brands across industries – retail, consumer, technology and more – on digital strategies spanning design, content, technology, and performance. As TCS looks to grow its design talent across 45 countries, Adobe XD will enable the company to scale quickly and collaborate more effectively internally and with clients.

When I spoke with Sunil Karkera, the global head of TCS Digital Interactive, about the news he said, “Adobe XD gives us an end-to-end tool to create experience design at scale. This increases productivity for our global creative talent and streamlines design collaboration with our customers.”

With a relationship exceeding 13 years with Adobe, TCS has a record number of Adobe solutions deployed for companies in many different industries across the globe. TCS is excited that Adobe XD integrates with Creative Cloud – which is used by their own in-house designers and their global clients every day. The sharing, commenting and collaboration features in Adobe XD are powerful to TCS design teams – enabling them to work more effectively together on designs and to be able to share and keep design assets in sync.

Introducing the New Adobe Exchange Marketplace

Today, we’re also unveiling the next iteration of the Adobe Exchange marketplace. Brands benefit from access to all third-party applications across Adobe Creative Cloud, Document Cloud and Experience Cloud in one central location. Previously, customers could only access third-party applications via different portals for each Adobe cloud.

The Adobe Exchange marketplace houses thousands of pre-built applications that connect Adobe solutions and best-of-breed third-party technology providers. These applications empower customers to truly customize their Adobe solutions to fit their unique business needs. In addition to the new marketplace, we’re also announcing several third-party applications from leading creative companies, built using Creative Cloud SDKs and APIs. They will all soon be available on the new Adobe Exchange marketplace:

  • – Simplifies video editing by enabling designers to move videos and comments seamlessly between Adobe Premiere Pro and Adobe After Effects.
  • Jira – Streamlines collaboration between designers and developers by allowing designers to attach designs and assets created in Adobe Creative Cloud to an Atlassian Jira project without leaving Adobe Creative Cloud.
  • Logitech – Enables greater control of the creative process for designers using the new Craft keyboard, featuring a creative input dial that adapts to what the designer is creating in Adobe Creative Cloud.
  • Microsoft – Speeds up creative feedback, iteration, and decision-making by giving designers the ability to share Adobe Creative Cloud assets and Adobe Stock images in Microsoft Teams.
  • PageProof – Streamlines and simplifies the review and approval process by letting designers send out their creative assets for real-time feedback, without leaving their Adobe Creative Cloud applications.
  • Pantone – Saves time and resources by giving designers access to more than 2,000 Pantone colors in Adobe Illustrator in order to preview how they would appear on 28 different packaging materials, inks and print processes.
  • Workfront – Speeds up the review and approval process by giving designers the ability to save and export assets created in Adobe Creative Cloud to Workfront.
  • Wrike – Streamlines project management by integrating Wrike’s functionality – like finding a task, adding comments, and marking tasks as complete – directly into Adobe Creative Cloud.

Partners can start adding listings and customers will be able to access the listings in the new Adobe Exchange marketplace later this year. Over the next few months, we’ll also be adding new functionality into the Adobe Exchange marketplace like private sharing. This will give partners the ability to share an integration privately for beta testing or for a customer to distribute internal apps to employees.

As consumer expectations continue to grow, so does our need to innovate and help customers build against those expectations. Our work with partners is an integral part of Adobe’s vision to help customers meet the future needs of consumers everywhere. If you’re interested in learning more about how to get involved in Adobe’s global partner program, visit

Fighting phishing with smarter protections

Editor’s note: October is Cybersecurity Awareness Month, and we're celebrating with a series of security announcements this week. This is the third post; read the first and second ones.

Online security is top of mind for everyone these days, and we’re more focused than ever on protecting you and your data on Google, in the cloud, on your devices, and across the web.

One of our biggest focuses is phishing, attacks that trick people into revealing personal information like their usernames and passwords. You may remember phishing scams as spammy emails from “princes” asking for money via wire-transfer. But things have changed a lot since then. Today’s attacks are often very targeted—this is called “spear-phishing”—more sophisticated, and may even seem to be from someone you know.

Even for savvy users, today’s phishing attacks can be hard to spot. That’s why we’ve invested in automated security systems that can analyze an internet’s-worth of phishing attacks, detect subtle clues to uncover them, and help us protect our users in Gmail, as well as in other Google products, and across the web.

Our investments have enabled us to significantly decrease the volume of phishing emails that users and customers ever see. With our automated protections, account security (like security keys) and warnings, Gmail is the most secure email service today.

Here is a look at some of the systems that have helped us secure users over time, and enabled us to add brand new protections in the last year.

More data helps protect your data

The best protections against large-scale phishing operations are even larger-scale defenses. Safe Browsing and Gmail spam filters are effective because they have such broad visibility across the web. By automatically scanning billions of emails, webpages, and apps for threats, they enable us to see the clearest, most up-to-date picture of the phishing landscape.

We’ve trained our security systems to block known issues for years. But new, sophisticated phishing emails may come from people’s actual contacts (yes, attackers are able to do this), or include familiar company logos and sign-in pages.

Attacks like this can be really difficult for people to spot. But new insights from our automated defenses have enabled us to immediately detect, thwart and protect Gmail users from subtler threats like these as well.

Smarter protections for Gmail users, and beyond

Since the beginning of the year, we’ve added brand new protections that have reduced the volume of spam in people’s inboxes even further.

  • We now show a warning within Gmail’s Android and iOS apps if a user clicks a link to a phishing site that’s been flagged by Safe Browsing. These supplement the warnings we’ve shown on the web since last year.


  • We’ve built new systems that detect suspicious email attachments and submit them for further inspection by Safe Browsing. This protects all Gmail users, including G Suite customers, from malware that may be hidden in attachments.
  • We’ve also updated our machine learning models to specifically identify pages that look like common log-in pages and messages that contain spear-phishing signals.

Safe Browsing helps protect more than 3 billion devices from phishing, across Google and beyond. It hunts and flags malicious extensions in the Chrome Web Store, helps block malicious ads, helps power Google Play Protect, and more. And of course, Safe Browsing continues to show millions of red warnings about websites it considers dangerous or insecure in multiple browsers—Chrome, Firefox, Safari—and across many different platforms, including iOS and Android.


Layers of phishing protection

Phishing is a complex problem, and there isn’t a single, silver-bullet solution. That’s why we’ve provided additional protections for users for many years.

  • Since 2012, we’ve warned our users if their accounts are being targeted by government-backed attackers. We send thousands of these warnings each year, and we’ve continued to improve them so they are helpful to people.
  • This summer, we began to warn people before they linked their Google account to an unverified third-party app.
  • We first offered two-step verification in 2011, and later strengthened it in 2014 with Security Key, the most secure version of this type of protection. These features add extra protection to your account because attackers need more than just your username and password to sign in.

We’ll never stop working to keep your account secure with industry-leading protections. More are coming soon, so stay tuned.

Redefining Modern Creativity with the Next Generation of Creative Cloud

More people are telling stories than ever before. And at Adobe, we strive to enable that expression by making our creative tools accessible to all at any time, in any place. That’s why today at Adobe MAX we announced the next generation of Creative Cloud.

By introducing a range of new applications in Creative Cloud, along with significant innovations across our flagship tools, we’re enabling creative professionals and enthusiasts alike to express themselves with apps and services that connect across devices, platforms and geographies.

We built the next generation of Creative Cloud in collaboration with you – the creative community – collecting your feedback every step of the way. Three themes guided us throughout our journey to redefine modern creativity:

First, bringing about next-generation experiences that embrace a truly modern approach, so you can work anywhere and any way you want.

Second, increasing accessibility to creativity by investing in assets, education and support to make it easier for you to be successful using any platform.

Lastly, accelerating creativity with Adobe Sensei by embedding artificial intelligence capabilities into our products, which make great design more accessible for everyone.

Next-Generation Experiences in Creative Cloud

The extensive advancements unveiled today help accelerate the creative process for both creative professionals and enthusiasts.

  • To empower photographers of all levels, we have built the all new Lightroom CC, a cloud-centric photo service for editing, organizing, storing, and sharing photos – from anywhere. For photographers who prefer a more traditional desktop-first workflow, we have also brought performance and editing improvements to Photoshop Lightroom Classic CC, previously known as Photoshop Lightroom CC.
  • For designers, we’re introducing two new applications that help you expand your skillsets. Previously in beta, Adobe XD CC is the all-in-one cross-platform solution for designing and prototyping mobile apps and websites. Additionally, Dimension CC, previously called Project Felix, enables graphic designers with no 3D experience to quickly create and iterate on photorealistic 3D images.
  • We also launched Character Animator CC, a 2D animation tool previously in beta, that helps bring still image artwork from Photoshop CC or Illustrator CC to life.

Download or update the latest from Creative Cloud:

Accessibility to Creativity

We hear you when you say you want to learn, share and be inspired as part of a broader creative community. We see this manifest on Behance, where nearly 10 million people showcase their creative work. Today, we are excited to launch AdobeLIVE on Behance, a live streaming channel for learning and inspiration for the community from the community.

As part of this announcement, we’re also empowering the creative community with new integrated assets, expanded services and educational resources that enable Creative Cloud users of all levels to realize their creative potential.

For example, Adobe Stock has expanded its asset collection with the introduction of hundreds of professionally-created motion graphics templates for video users in Premiere CC and After Effects CC. And Adobe Typekit now leverages Adobe Sensei to provide a whole new way to visually and easily search for fonts.

Accelerating Creativity with Adobe Sensei

We understand the pace at which you create is constantly speeding up, and with Creative Cloud we’re helping you go from concept to completion much faster. That’s why we’re combining the power of our creative community and Creative Cloud tools with creative intelligence in Adobe Sensei, our AI and machine learning framework. As part of this Creative Cloud release, we continue to embed capabilities powered by Adobe Sensei across our solutions to give you more time to focus on what you do best – create and innovate. From the curvature pen tool in Photoshop and Illustrator to auto-lip sync in Character Animator, Adobe Sensei enables our tools – and you – to work more effectively and efficiently across your digital canvas.

With the AI revolution emerging as one of the most profound technological paradigm shifts, we’re embracing it with Adobe Sensei to amplify human creativity and intelligence. By blending the art of creativity with the science of data, Adobe Sensei will help free you from mundane tasks and unleash your creativity. We’re constantly working on new innovations that bring the magic of Adobe Sensei to the tip of your brush and transform the entire creative workflow.

Adobe is invested in accelerating the creative process for everyone with the world’s best creative apps. After more than five years of continuous innovation, we’re making a modest adjustment in Creative Cloud commercial pricing for North America customers, which will take effect on March 1 or at the next contract renewal. Until then, renewing subscribers can experience the value of the new features and products announced and available today, and new subscribers can lock in a one-year subscription at the current price with no additional charge:

  • Our current STE Student/Education, Creative Cloud Photography and XD plans will see no pricing adjustment.
  • Creative Cloud for Individuals All and Single App plans will increase by 6%. For example, the new price for Creative Cloud All Apps annual plan will be $52.99 per month from $49.99.
  • Creative Cloud for Teams plans will increase by 14%. For example, the new price for Creative Cloud for Teams All Apps annual plan will be $79.99 per month from $69.99.

For more on what we’re sharing with the creative community at Adobe MAX this year, watch the Adobe MAX keynotes or learn more on

Welcome Adobe XD CC

I’m delighted to announce the 1.0 release of Adobe XD today. Since the first beta release, we have been laser-focused on building a solution that enables fast-paced, iterative design – one that eliminates the need to jump between multiple tools and services, one that prioritizes quality and speed over the number of features, and one that performs flawlessly on either macOS or Windows 10 hardware.

Journey to 1.0

The public beta gave designers visibility into our progress, and the opportunity to provide feedback to help shape XD’s future. With over one million community members who have downloaded XD, we have pored over every single idea, comment, and suggestion.

If you shared your thoughts with us, whether it was via UserVoice, social media, or in-person at a conference – our entire team would like to thank you for being a part of this journey, and for helping us define where we go next.

What’s in 1.0

Design, Prototype, and Share: XD lets you wireframe, create low or high fidelity visual designs, define navigation flows and transitions between artboards, preview and share interactive prototypes, gather feedback from stakeholders, and export assets for production use.

Re-imagined for modern workflows: As you’re using XD, you’ll notice that defining repeating elements, masking images, managing colors, styles, and symbols, and staying organized with layers are faster and more intuitive than expected.

Quality and performance above all: XD’s top two features are speed and stability – from its near-instantaneous launch time and lightning-fast zooming and panning (even if you have hundreds of artboards in a single project) to unparalleled stability across platforms.

What’s Next

While 1.0 represents a major milestone, we consider it a foundational release. Our ambition is to make XD the only UX design solution you will ever need, delivering regular updates with more features and functionality based on your feedback, all while keeping XD fast and nimble. Please visit my blog post here for details on what you can expect to see in upcoming releases.

Designers Love XD

We believe XD’s core capabilities, reimagined workflows, quality and performance all contribute to making XD a joy to use. See what designers have already made with XD, and watch how Boosted used XD to design a platform that connects their community members together. This is just one example of how companies are already using XD every day.

Getting started with XD

With the introduction of XD 1.0, now is a great time to design, prototype and share user experiences. We are committed to preserving its quality and performance with each subsequent release while adding the capabilities you need to create richer experiences and to collaborate with your team. You can get XD for only US $9.99/month or as part of the Creative Cloud All Apps plan. Click here to download XD, or visit our Plans Page to learn more. We’re looking forward to seeing your designs – don’t forget to share and tag your projects with “#madewithadobexd.”

Thank you again for helping us to reach this milestone – your ongoing feedback and suggestions are gratefully received by our team on UserVoice, and you can always reach out to us on Twitter (@AdobeXD).

Adobe and Coca-Cola Go for Gold to Support Special Olympics

Adobe has a storied history of collaborating with leading brands to give creatives access to exclusive content and opportunities to flex their ingenuity in front of an international audience. Continuing in this spirit, we’re thrilled to announce our new groundbreaking collaboration with a global design icon. At Adobe MAX 2017 – the world’s premier creativity and design conference – we launched Coke x Adobe x You.

For this exclusive campaign with Adobe, Coca-Cola is providing the world’s largest creative community with full access to a unique creative assignment and iconic assets including the Coca-Cola Spencerian script and the Coca-Cola dynamic ribbon.

From graphic designer Gemma O’Brien of Australia to photographer Guy Aroch of Israel, Adobe and Coca-Cola briefed 15 creative professionals from Spain to Germany, South Africa to Japan, to develop an inspired work of art. Now, we invite you to experience this brief firsthand to create your own version inspired by Coca-Cola, using Adobe Creative Cloud. For every submission received by December 31, 2017, Coca-Cola will donate to Special Olympics (up to $30,000).

“With this collaboration, we’re pleased to bring Adobe’s global creative community together for the opportunity to participate in a brief with an influential brand such as Coca-Cola,” said Jamie Myrold, Vice President of Design at Adobe, “Every submission contributes toward our shared vision of designing for good, which is at the forefront of what we do every day.”

Driven by our shared values of design, quality and brand integrity in an increasingly multi-sensory world, Adobe and Coca-Cola are challenging the design and creative communities to reimagine what it means to push the limits.

“Design has been at the heart of Coca-Cola for 130 years. With Tokyo 2020 as the stage, we are thrilled to collaborate with Adobe, and share our most beloved visual assets with designers and creatives everywhere. Whether you’re an established pro, or an aspiring artist, this opportunity is for everyone,” said James Sommerville, Vice President of Global Design at The Coca-Cola Company.

Coke x Adobe x You brings us all together around one common passion: creating exceptional experiences to benefit a meaningful cause.

Artwork by Birgit Palma.

Here’s what you need to do next

Are you ready to join our talented international community of creatives, and share your artistic vision? If so,

You can also check out the online gallery for artwork and videos from designers around the globe that will inspire you to put your ideas into motion and support Special Olympics.

Whether you’re in graphic design, photography, motion graphics, 3D, or illustration, we want to see what you do best. Visit this page to learn how you can share your vision with the world.

Artwork by Kouhei Nakama.


What’s Next for Adobe XD CC?

Today we announced Adobe XD 1.0, representing a foundational milestone on our continued journey to deliver the complete solution for UX/UI designers, as well as the creative professionals, stakeholders, and developers they work with. Similar to the beta period, we’ll continue to deliver regular updates that bring new capabilities and refinement based on your feedback. Over the next couple of months, expect to see more progress across design, prototyping, and sharing capabilities.


We’re working to provide the essential core design tools while keeping XD fast and focused on UX design. Support for underlining text, switching between point and area text, layout grids, and JPG export are coming in the next few months, with further support for adaptive and responsive design, plus tools to help with management and reusability of assets.



Beyond screen-to-screen transitions, you need a set of rich capabilities to truly communicate design intent. You will see progress on overlays (transitioning elements on top of an existing artboard) and scrollable areas within an artboard soon. Following that, our goal is to enable higher-fidelity micro-interactions and support more gestures and interaction types.


Most design projects involve collaborating with developers for implementation. Creating documentation and outlining specs takes valuable time away from designing. To help this process, you will be able to publish design specs in the next release of XD, enabling developers to view flows and get measurements, colors and styles from your design — all in the browser.

Third-party Integrations

In addition to what’s included with XD, we are enabling integration with third-party tools to extend and customize XD. For the first wave, you can expect to see interoperability with Zeplin and Sympli, providing teams with a choice in design to development workflows. We are also planning to add plug-in support, with the ability for anyone to extend XD with additional tools, commands, and panels.

Thank You

We are excited to release XD 1.0 today, but we are even more excited about its future and how you will use XD to design, prototype, and share your creations. As always, we encourage you to tell us what you think or reach us on Twitter (@AdobeXD).

Thank you for your continued support, and we hope you join us on this journey to build the complete solution for UX design.





Adobe Delivers Modern, Intelligent Design Solutions for Global Brands and Agencies


The path to digital transformation is both exhilarating and daunting. The opportunity for brands has never been greater through digital and mobile customer relationships, but successful digital transformation requires brands to work differently. Becoming an Experience Business requires combining creativity, marketing expertise and data, with much greater collaboration across functions.

Adobe Creative Cloud is a one-stop shop for creativity. In an experience-driven world, this creativity is mission-critical to produce the breakthrough design and experiences required for success – as well as the content velocity required to fuel all the personalized touchpoints that are necessary for your brand reach.

We’ve been talking about the need for experience design and content velocity for a while at Adobe – all of which requires tighter integration and seamless workflows across creative and marketing. Adobe is addressing this with several announcements at the Adobe MAX conference today.

Adobe XD

First off, we are launching a new app today at MAX, Adobe XD CC 1.0, to enable companies to take an experience-driven approach to designing products and digital touchpoints across mobile and web.

Adobe XD is built from the ground-up to quickly prototype and design engaging experiences, whether for mobile, web, or any digital touchpoint. With Adobe XD CC, user experience designers can now quickly go from concept to prototype when designing websites, mobile apps and more. It works across platforms, including Mac, Android and Windows, and helps teams easily collaborate internally as well as externally with agencies.

We also announced that Tata Consultancy Services (TCS), a leading global IT services, consulting and business solutions organization, will standardize on Adobe XD and use it as its main creative solution for UX and UI design internally, as well as with its clients, starting in its Digital Interactive practice.

Adobe Dimension

We’re also launching Adobe Dimension, which gives graphic designers the power of 3D, with the simplicity of working in 2D. Designers can create photorealistic renderings for packaging, product shots and branding. Traditionally, 3D has been difficult and expensive for graphic designers. Adobe Dimension simplifies this with a modern and intuitive user experience and integration with Adobe Stock.

Adobe Stock

Every day, we hear from our customers about their needs to scale creative production – and today they have a powerful new way to scale. We are expanding the Adobe Stock asset collection with hundreds of professionally-created motion graphics templates and new 3D-ASD files for 3D scenes created in Dimension CC.

Creative and Marketing Cloud Integration

Today we’re also announcing new integrations between Creative Cloud and Adobe Marketing Cloud, part of Adobe Experience Cloud, to more tightly connect content and data:

  • Creative Cloud for enterprise and Adobe Experience Manager Assets integration to streamline workflows: We’re previewing  a new integration that is a major step in providing an optimized and unified experience between Creative Cloud for enterprise and Adobe Experience Manager Assets. This next generation in-app integration ensures that creatives and marketers no longer need to toggle back and forth between different apps to access, use, review, and archive design assets. An improved user experience surfaces all relevant design assets and files across an organization without having to know where they reside or to search within different apps for them. The integration will be in private beta by early 2018.
  • Engage in 3D with Dimension CC and Experience Manager: To help marketers leverage 3D designs created in Dimension CC, Adobe is previewing an integration of the tool with Adobe Experience Manager’s enterprise digital asset management and delivery offering. Marketers can seamlessly leverage 3D objects designed in Dimension CC and transform them into images for delivery across touchpoints, including mobile, web, apps, and more.

We are thrilled to be at MAX with our customers and partners, celebrating creativity and the power of design for self-expression and brand expression.

Introducing the Master Artists Motion Graphics Template Collection

Today at Adobe MAX, we unveiled the new Adobe Stock Motion Graphics templates collection. These prebuilt templates have been sourced from some of the world’s best motion graphics designers and are optimized to seamlessly work inside Adobe’s video applications.

We caught up with three acclaimed motion graphics designers, Andrew Kramer, Valentina Vee, and Nik Hill, whose templates are now available on Adobe Stock, about how they started out in the field and why these graphic elements are so crucial to videos and films.

For Andrew Kramer, After Effects veteran and founder of Video Copilot, motion graphics have always had an emotional and memorable appeal. “Some of my earliest memories of title sequences include Superman, Bullitt, and the many great James Bond openings,” he shares. A little stylistic flair goes a long way toward making a lasting impression on the viewer: “The way a title moves, the font and color say a lot about a film, its tone, and the message it’s trying to convey.”

In addition to adding memorable flavors to a film, motion graphics provide structure. “A video may be created one shot at a time, but the viewer will perceive it as a whole,” Andrew explains. It’s critical to organize content with clear sections and even infographics to ensure the story is cohesive and informative.

Andrew’s templates on Adobe Stock feature his unique aesthetic and bring his experience in feature films to the forefront.

Download Andrew’s template for free on Adobe Stock.

Valentina Vee realized the importance of motion graphics when she was working on videos for beauty influencer Michelle Phan. “I realized that motion graphics played a big role in how each video was structured and themed,” she explains. Case in point: “You can only do an eyeliner tutorial so many times before the audience gets weary, but by wrapping each video in its own world and creating a completely new motion graphics package for each video, I was able to give each piece its own unique flavor.”

When creating templates for Adobe Stock, Valentina kept YouTubers in mind and crafted a set of templates that includes an introduction, lower thirds, a chapter divider, a transition, as well as a text box. These cohesive templates are designed to work together so videographers can focus on their content and not worry about creating their graphics from scratch. And unlike traditional After Effects templates, all of the Motion Graphics templates on Adobe Stock are customizable directly inside Premiere Pro, so you don’t have to leave your workflow to make edits to the graphic elements.

Download Valentina’s template for free on Adobe Stock.

Growing up, London-based designer Nik Hill was always interested in art and illustration. He was especially drawn to motion graphics because they combined creative elements with technical execution. These days, his work can be seen in Hollywood blockbusters such as Marvel’s Avengers: Age of Ultron, Guardians of the Galaxy, and Jupiter Ascending.

“I’m a big advocate for using stock content to enhance your workflow and cut down working times,” says Nik. Instead of having to build something from the ground up, you can find a high quality template and use it as a jumping off point to customize and create something that’s unique to your project.

Stock templates not only speed up the process of video creation dramatically, they also provide inspiration. Nik’s hope is that perhaps by seeing these elegant motion graphics in the marketplace, editors will feel inclined to take the leap into learning motion graphics and expand their creative and technical skill sets.

Download Nik’s template for free on Adobe Stock.

These designers may have distinct styles and processes, but their goals for creating motion graphics templates are one and the same: to enable creatives to produce captivating videos and inspire the next generation of motion graphics enthusiasts and professionals. See more templates from Andrew, Valentina, Nik, and numerous other talented designers on Adobe Stock.

If you have experience creating and selling motion graphics templates and are interested in selling your work on Adobe Stock, please email us at for more details.


Samsung Completes Qualification of 8nm LPP Process

Samsung Electronics, a world leader in advanced semiconductor technology, announced today that 8-nanometer (nm) FinFET process technology, 8LPP (Low Power Plus), has been qualified and is ready for production.


The newest process node, 8LPP, provides up to 10-percent lower power consumption and up to 10-percent area reduction compared to 10LPP through a narrower metal pitch. 8LPP will provide differentiated benefits for applications including mobile, cryptocurrency, and network/server, and is expected to be the most attractive process node for many other high-performance applications.
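As a back-of-envelope illustration (the 10-percent figures are Samsung's stated upper bounds, not measurements), a 10-percent area reduction at a fixed transistor count implies roughly an 11-percent gain in transistor density:

```python
# Samsung quotes up to 10% lower power and up to 10% smaller area for 8LPP
# vs 10LPP. For a fixed transistor count, shrinking area by a fraction r
# raises density by a factor of 1 / (1 - r).
area_reduction = 0.10  # upper bound from the announcement

density_gain = 1 / (1 - area_reduction) - 1
print(f"density gain ~ {density_gain:.1%}")  # ~ 11.1%
```

The asymmetry (10% area saved, ~11% density gained) is just the arithmetic of reciprocals; it is not an additional claim from Samsung.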


As the most advanced and competitive process node before EUV (extreme ultraviolet) lithography is employed at 7nm, 8LPP is expected to ramp up quickly to stable yields by adopting the already proven 10nm process technology.


“With the qualification completed three months ahead of schedule, we have commenced 8LPP production,” said Ryan Lee, Vice President of Foundry Marketing at Samsung Electronics. “Samsung Foundry continues to expand its process portfolio in order to provide distinct competitive advantages and excellent manufacturability based on what our customers and the market require.”


“8LPP will have a fast ramp since it uses proven 10nm process technology while providing better performance and scalability than current 10nm-based products,” said RK Chunduru, Senior Vice President of Qualcomm.


Details of the recent update to Samsung’s foundry roadmap, including 8LPP availability and 7nm EUV development, will be presented at the Samsung Foundry Forum Europe on October 18, 2017, in Munich, Germany. The Samsung Foundry Forum was held in the United States, South Korea and Japan earlier this year, sharing Samsung’s cutting-edge process technologies with global customers and partners.

Making computer science accessible to more students in Africa

Computer Science (CS) fosters innovation and critical thinking, and empowers students with the skills to create powerful tools to solve major challenges. Yet many students, especially in their early years, do not have access to opportunities to develop their technical skills.

At Google, we believe that all students deserve these opportunities. That is why, in line with our commitment to prepare 10 million people in Africa for jobs of the future, we are funding 60 community organisations to hold training workshops during Africa Code Week 2017. These workshops will give over 50,000 students a chance to engage with CS and learn programming and computational-thinking skills.

Africa Code Week is a grassroots movement that encourages programming by showing how to bring ideas to life with code, demystifying these skills and bringing motivated students together to learn. Google has been involved in this campaign as a primary partner to SAP since 2015, providing sponsorships to organizations running initiatives to introduce students to CS.

This year, we received more than 300 applications from community organizations across Africa. We worked with the Cape Town Science Centre to select and fund 60 of these organizations that will deliver CS workshops to children and teens (ages 8 to 18) from October 18-25 in 10 African countries (Botswana, Cameroon, Ethiopia, Ghana, Kenya, Lesotho, Nigeria, South Africa, Gambia and Togo).

Some of the initiatives we are supporting include:

Google is delighted to support these great efforts. Congratulations to the recipient organizations. Step into the world of Google in Computer Science Education at

Thousands of Apps for Any Task

3rd Party App Ecosystem for Creatives featured at MAX

As the creative community gathers in Las Vegas for this year’s Adobe MAX conference, customers will flock to the Adobe Make It Experience booth in the Community Pavilion. This is where they will get a first peek and hands-on demos of the latest products and enhancements to Creative Cloud that will be announced at MAX.

A small team in the center of the booth will be showcasing a growing ecosystem of 3rd party apps that extends the functionality of Creative Cloud to help you in your everyday challenges.

Designed to complement your Creative Cloud workflow and fill in gaps with specialized features for the task at hand, there’s little you can’t do on the Creative Cloud platform. From creating content when you’re not in front of a computer, to shaving off a few minutes of time doing repetitive work, or keeping feedback in a central place, there’s a 3rd party app for the task!

Whether you’re a Photoshop or Premiere Pro user; a freelancer or part of a large enterprise team, come visit us at MAX in the Make it Experience booth. The Partner Ecosystem team will be there to help you find the right app that seamlessly connects to your Creative Cloud workflow.

We’ll be featuring 3rd party Creative Cloud Connected apps from the esteemed partners above. Apps from some of these partners and thousands more can be found on Adobe Exchange, our app marketplace.

Live Stream Series | Make Good Videos GREAT with Motion Graphics

Earlier this year, Jason Levine launched a 7-part live stream series on How to Make Great Videos. In the series, Jason walks through best practices for creating video content – from importing footage to sharing with the world.

On October 27th, Jason will kick off a new 3-part series of interactive live streams to help you take your video skills to the next level.

Jason will demonstrate LIVE on the Adobe Creative Cloud Facebook page each Friday at 9am PT / 12pm ET / 6pm CET throughout this series. Get notified when streams go live by following the Facebook page or signing up for the events: Motion Graphics, Audio, and Color. Bookmark this video playlist on the Adobe Creative Cloud YouTube channel for replays of every stream with timestamped chapters noted in the descriptions.

Here’s what you’ll learn in the first series on using Motion Graphics:

Week 1: Motion Graphics in After Effects CC – Live on Facebook October 27, 2017 at 9am PT

Learn how to build Motion Graphics templates in Adobe After Effects CC; how to streamline your After Effects workflows using the new Essential Graphics panel; how to leverage flexibility with expressions; and how to share adjustable Motion Graphics templates (.mogrt files) to Premiere Pro CC.

Adobe tools in the spotlight: After Effects, Premiere Pro, Stock

Week 2: Motion Graphics in Premiere Pro CC – Live on Facebook November 3, 2017 at 9am PT

Get an introduction to native graphics workflows in Premiere Pro CC, then discover the power of the Essential Graphics panel. Dig into the Type Tool, Shape Tools, and Master Text Styles. Learn how to bend time and space with new Responsive Design features that allow you to make dynamic adjustments to the duration and position of graphics without losing keyframes.

Adobe tools in the spotlight: Premiere Pro, Stock

Week 3: Putting It All Together – Motion Graphics in Premiere Pro CC and After Effects CC – Live on Facebook November 10, 2017 at 9am PT

Dive deeper into Responsive Design and what matters most when building and exporting Motion Graphics templates in Premiere Pro for differently sized sequences, whether for your own future reuse or for sharing with others. Learn how to copy and paste layers from Premiere Pro graphics into After Effects, and how pinning relationships become parenting with the Pick Whip.

Adobe tools in the spotlight: Premiere Pro, After Effects, Stock

Next up…

Now that your videos have stunning graphics, learn how to boost the Audio & Color quality in the next series. Sign up now for reminders when these sessions go live on Fridays at 9amPT:

Audio: December 1 – December 15, 2017

Color: January 12 – January 26, 2018

Join Us for a Meeting of the Minds and Machines

Next week Cisco will be exhibiting at the 2017 GE Minds + Machines conference in San Francisco (October 25 and 26). Cisco is a platinum sponsor, and if you’ve never attended this conference, it’s one of the premier Industrial IoT (IIoT) events. Topics of discussion will include IIoT architecture, IIoT data management and analytics, industrial control […]