Driving is an essential part of our daily activities, so at Google we spend a
lot of time thinking about how we can make Android devices better and safer for
our users: how we can prevent distracted driving and, together, build an open
ecosystem that enables safety-first smartphone experiences.
Recently we launched Driving Do-Not-Disturb on the newly announced Pixel 2
generation of devices. Once enabled, Driving Do-Not-Disturb automatically puts
your device into Do Not Disturb mode while driving. In this mode, incoming
messages and notifications are silenced, while you can still receive incoming
calls, navigation directions and voice interactions through a connected car
Bluetooth system. The product is designed to limit distractions while driving
without getting in the way, so users can continue to use navigation and other
similar apps with minimal friction.
Behind the scenes, it uses AI-powered on-device Activity Recognition that
detects when a person is driving, using low-power signals from multiple
sensors, Bluetooth and Wi-Fi. Activity Recognition uses the Android Sensor Hub
to ensure low-latency, low-power and accurate driving detection.
This is a next step in our journey, but we are far from done. Early next year
we are introducing the Activity Recognition Transition API, the same API used
by Driving Do-Not-Disturb, so developers can build their own distraction-free
driving experiences. We appreciate your feedback and will continue to listen as
the ecosystem evolves. If you have questions about setting up Driving
Do-Not-Disturb, check out our help center.
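For developers curious what driving detection might look like in their own apps, here is a minimal sketch of registering for vehicle enter/exit transitions with the Google Play Services Activity Recognition client. The class and method names follow the Play Services location APIs, but treat the sketch as illustrative: the Transition API described above had not yet shipped at the time of writing.

```kotlin
import android.app.PendingIntent
import android.content.Context
import com.google.android.gms.location.ActivityRecognition
import com.google.android.gms.location.ActivityTransition
import com.google.android.gms.location.ActivityTransitionRequest
import com.google.android.gms.location.DetectedActivity

// Ask Play Services to fire the given PendingIntent when the user
// starts or stops driving (enters or exits a vehicle).
fun requestDrivingTransitions(context: Context, pendingIntent: PendingIntent) {
    val transitions = listOf(
        ActivityTransition.Builder()
            .setActivityType(DetectedActivity.IN_VEHICLE)
            .setActivityTransition(ActivityTransition.ACTIVITY_TRANSITION_ENTER)
            .build(),
        ActivityTransition.Builder()
            .setActivityType(DetectedActivity.IN_VEHICLE)
            .setActivityTransition(ActivityTransition.ACTIVITY_TRANSITION_EXIT)
            .build()
    )

    ActivityRecognition.getClient(context)
        .requestActivityTransitionUpdates(
            ActivityTransitionRequest(transitions), pendingIntent)
}
```

The PendingIntent typically targets a BroadcastReceiver that inspects the transition events and toggles app behavior accordingly.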
Posted by Kate Brennan and Mathilde Cohen Solal, Google Play and Daraiha Greene, CS in Media
Google Play is committed to empowering new and existing voices in gaming. Earlier this year, we hosted the Indie Games Festival and sponsored the Girls Make Games summer camp. We also announced a collaboration between Infinite Deviation and Google Play.
Infinite Deviation is an initiative created by Google Computer Science (CS) in Media and Ideas United in order to tackle issues of representation in computer science. The collaboration between Google Play and Ideas United brought the Infinite Deviation program to gaming, called Infinite Deviation: Games. The program invited game designers from all backgrounds to pitch an original mobile game concept that resonates with underrepresented audiences.
Today we are excited to announce the three teams selected for the Infinite Deviation: Games development program.
A select panel of industry experts reviewed applications and chose the top three ideas. The judging panel included Colleen Macklin (Founder and Co-Director, PETLab), Jeremy Vanhoozer (Senior Creative Director, Plants vs Zombies), Molly Proffitt (CEO, Ker-Chunk Games), Shirin Laor-Raz Salemnia (Founder and CEO, PlayWerks), and Sarah Thomson (Global BD Lead, Indies, Google). These judges scored and delivered personal feedback for each submission, with the three highest scoring games moving into further development.
Here’s a closer look at the three games we’ll be funding and supporting through development over the next six months:
Historic Gay Bar Tycoon
Mo Cohen & Maria Del Castillo Infantas – Queermo Games
Historic Gay Bar Tycoon (name pending) starts you off with a brand new queer bar in the 1920s. This game explores the role bars played in LGBT history. Will your bar survive revolutions, epidemics, and the rise of dating apps?
Queermo Games is pretty much what it sounds like: a scrappy and small indie game developing team just trying to make more LGBT games. Conveniently, they are also next door neighbors. Maria is a queer Latina who handles the art and the music, and Mo is a non-binary Jewish queer who tackles the programming and writing. Together, they also work on another longer-term project called Queer Quest with their buddy Hagen.
Burn Ban
Harrison Barton & Morgan Rowe – Pride Interactive
Burn Ban is an interactive visual novel in which you assume the role of Twig, a mentally ill queer girl. After turning to destructive behaviors to cope with the death of a close friend, she is sent to Camp Sisquoc, a summer retreat for misguided students. A few days into camp, her dead friend’s social media page mysteriously starts posting again, and Twig and her friends set out to uncover the mystery behind the posts.
Pride Interactive is currently made up of two developers, Harrison Barton and Morgan Rowe. Pride Interactive was started as a student game team, and is now continuing on to develop independent projects. Pride Interactive endeavors to further their mission of creating a more diverse environment in the industry through games that deal with serious themes, and diverse character driven narratives.
Ghost in the Graveyard
Adnan Agha, Vivian Allum, and Armand Silvani – Ghost Stories
Ghost in the Graveyard is a story-driven mobile mystery game where you snoop through your missing brother’s old phone to try and find him. “Can you find a missing person when all you have is their phone?”
Ghost Stories is a three member team from NYC with a mission to create genuine and unique experiences that connect with players. The team consists of Vivian, the lead designer and programmer, Armand, the artist and writer, and Adnan, the programmer and resident ghost. They’ve previously published a school project to the Xbox One and are thrilled to be able to work with Infinite Deviation to publish to Google Play.
You can find more information about the program at InfiniteDeviation.com/Games. Congratulations to the three winners, and thanks to everyone who entered the competition for their continued work to push the boundaries of game design and bring a unique voice to the industry.
Posted by Lukas Bergstrom, Product Manager, Android Developer Frameworks Team
Android runs on billions of devices, from high-end phones to airplane seatbacks. The Android OS manages resources aggressively to perform well on this huge range of devices, and sometimes that can make building robust apps complicated. To make it easier, we launched a preview of Architecture Components at Google I/O to provide guidance on app architecture, with libraries for common tasks like lifecycle management and data persistence. Together, these foundational components make it possible to write modular apps with less boilerplate code, so developers can focus on innovating instead of reinventing the wheel – and we hope to keep building on this foundation in the future.
Today we’re happy to announce that the Room and Lifecycle Architecture Components libraries have reached 1.0 stable. These APIs are ready for production apps and libraries, and are our recommendation for developers looking for help with app architecture and local storage (although they’re only recommended, not required). Lifecycles are now also integrated with the Support Library, so you can use them with standard classes like AppCompatActivity.
Although we’re declaring them stable today, the beta components are already used in apps that, together, have billions of installs. Top developers like Zappos have been able to spend more time on what’s important thanks to Architecture Components:
Prior to the release of Android Architecture Components we had our own ViewModel implementation. We used Loaders and Dependency Injection to persist our ViewModel through config changes. We recently switched to the Architecture Components ViewModel implementation and all that boilerplate went away. We found that we were able to spend more time on design, business logic and testing, and less on writing boilerplate or worrying about Android lifecycle issues.
We’ve also started to use LiveData which hooks directly into the Activity lifecycle. We use it to retrieve and display network data and no longer have to concern ourselves with network call subscription management.
– David Henry, Android Software Engineer, Zappos
Architecture Components provide a simple, flexible and practical approach that frees developers from some common problems so they can focus on building great experiences. This is based on core building blocks tied together by guidance on app architecture.
Every Android developer has to deal with the operating system starting, stopping and destroying their Activities. That means managing the state of components – such as observables used to update UI – as you move through the lifecycle. Lifecycles enables the creation of lifecycle-aware components that manage their own lifecycles, reducing the possibility of leaks or crashes. The Lifecycle library is the foundation for other Architecture Components like LiveData.
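As a sketch of what a lifecycle-aware component looks like (using the observer API from the original android.arch.lifecycle packages; later releases moved these classes to androidx), a component can register itself against a Lifecycle and manage its own setup and teardown:

```kotlin
import android.arch.lifecycle.Lifecycle
import android.arch.lifecycle.LifecycleObserver
import android.arch.lifecycle.OnLifecycleEvent

// A hypothetical listener that connects and disconnects itself as its
// owner moves through the lifecycle, so callers can't forget to clean up.
class LocationListener(lifecycle: Lifecycle) : LifecycleObserver {
    init {
        lifecycle.addObserver(this)
    }

    @OnLifecycleEvent(Lifecycle.Event.ON_START)
    fun connect() {
        // start receiving location updates
    }

    @OnLifecycleEvent(Lifecycle.Event.ON_STOP)
    fun disconnect() {
        // stop updates to avoid leaking the owner
    }
}
```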
LiveData is a lifecycle-aware observable that holds data and provides updates. Your UI code subscribes to changes and provides LiveData a reference to its Lifecycle. Because LiveData is lifecycle-aware, it provides updates when its Lifecycle is started or resumed, but stops providing updates when the LifecycleOwner is destroyed. LiveData is a simple way to build reactive UIs that are safer and more performant.
ViewModel separates ownership of view data and logic from lifecycle-bound entities like Activities and Fragments. A ViewModel is retained until its associated Activity or Fragment is disposed of forever – that means view data survives events like a Fragment being recreated due to rotation. ViewModels not only eliminate common lifecycle issues, they help build UIs that are more modular and easier to test.
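Put together, a minimal ViewModel exposing LiveData might look like the sketch below. UserViewModel and its userName field are illustrative names, not part of the library; the class and package names follow the 1.0 android.arch releases.

```kotlin
import android.arch.lifecycle.LiveData
import android.arch.lifecycle.MutableLiveData
import android.arch.lifecycle.ViewModel

class UserViewModel : ViewModel() {
    // Mutable internally, exposed to the UI as read-only LiveData.
    private val userName = MutableLiveData<String>()

    fun userName(): LiveData<String> = userName

    fun loadUser(id: String) {
        // In a real app this would be an asynchronous lookup;
        // here we just publish a value to observers.
        userName.value = "User #$id"
    }
}

// In an Activity or Fragment (the LifecycleOwner):
//   val model = ViewModelProviders.of(this).get(UserViewModel::class.java)
//   model.userName().observe(this, Observer { name -> nameView.text = name })
```

Because the observer is registered with the owner's Lifecycle, updates stop automatically when the Activity or Fragment is destroyed.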
Nearly all apps need to store data locally. While Android has bundled SQLite with the platform since version 1, using it directly can be painful. Room is a simple object-mapping layer that provides the full power of SQLite with less boilerplate. Features like compile-time query verification and built-in migration make it easier to build a robust persistence layer, while integration with LiveData lets Room provide database-backed, lifecycle-aware observables. Room blends simplicity, power and robustness for managing local storage, and we hope you give it a try.
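A minimal Room setup, sketched with a hypothetical User entity (the annotations and base classes follow the Room 1.0 API; the entity and DAO names are illustrative):

```kotlin
import android.arch.lifecycle.LiveData
import android.arch.persistence.room.*

@Entity
data class User(@PrimaryKey val id: Int, val name: String)

@Dao
interface UserDao {
    // Verified against the schema at compile time; returning LiveData
    // makes this a database-backed, lifecycle-aware observable.
    @Query("SELECT * FROM user WHERE id = :id")
    fun userById(id: Int): LiveData<User>

    @Insert(onConflict = OnConflictStrategy.REPLACE)
    fun insert(user: User)
}

@Database(entities = arrayOf(User::class), version = 1)
abstract class AppDatabase : RoomDatabase() {
    abstract fun userDao(): UserDao
}
```

Room generates the DAO implementation at compile time, so a typo in the SQL fails the build rather than crashing at runtime.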
Guide to App Architecture and more
Last but not least, we created a Guide to App Architecture that provides core principles applicable to all developers, and specific guidance on using Architecture Components together. Because we’ve heard from you that clear and consistent guidance is important, today we’re updating developer documentation to point to Architecture Components where appropriate. We also have a rich set of videos, codelabs and sample apps available at the Architecture Components site, with more to come.
Watch this space
Although the first set of Architecture Components is now stable, we know there’s more work to do. Over the last few months, we’ve listened to your feedback and made improvements. We also recently launched a new Architecture Component, PagedList, to alpha, in response to your feedback that handling large datasets with RecyclerView is too difficult. This is just the beginning – we have more major components under development that we’re looking to announce in the upcoming months.
Our hope with Architecture Components is to free developers to focus on providing unique new experiences for mobile devices. We’re glad we can finally announce them as stable for production use. We’d like to thank the community, which has given such great feedback along the way, and we look forward to continuing the discussion in the comments of this post. Finally, for those of you who’ve been waiting for this stable launch, get started today.
Our partners are incredibly important in helping our customers maximize the value of their cloud investments. Today, we’re announcing a first-of-its-kind strategic partnership with Salesforce that spans Google Cloud and Google Analytics to enable smarter, more collaborative experiences for our customers. As cloud-native companies, our partnership offers a unique opportunity to turn marketing, service and sales data into actionable insights and better business outcomes.
The new collaboration leverages the full value of Google Cloud. Salesforce has named G Suite as its preferred email and productivity provider. In addition, Salesforce plans to use Google Cloud Platform (GCP) for its core services as part of the company’s international infrastructure expansion.
Our teams are working very closely to develop new integrations that will connect Salesforce CRM with G Suite to offer the only cloud-native collaboration platform of its kind. These integrations will enable companies to surface powerful intelligence about their customers from Salesforce directly within Gmail, Sheets, Calendar, Drive, Docs and Hangouts Meet. Here’s more on what you’ll be able to do:
Salesforce Lightning for Gmail: Surface relevant Salesforce CRM data in Gmail, as well as customer interactions from Gmail directly within Salesforce, to service your customers faster. Identify high priority emails and suggest next steps based on the email content to work with customers faster than before.
Salesforce Lightning for Google Sheets: Embed Sheets anywhere in Salesforce, and with a single click push content from Salesforce Records or Reports to a new Sheet. Data will automatically update bi-directionally to ensure everyone has the most recent information.
Quip Live Apps for Google Drive and Google Calendar: Quip Live Apps will integrate with Google Drive and Calendar, allowing you to access information in a more collaborative, open cloud environment, and embed any list of Drive files, including Google Docs, Slides and Sheets, or your Google Calendar inside Quip. This new combination of Quip and G Suite will create a modern alternative to legacy intranet content services.
Salesforce for Hangouts Meet: Access relevant customer and account details, service case histories and more from Salesforce CRM directly within the Hangouts Meet interface. This gives you powerful insights directly in the communications platform to conduct better sales conversations or efficiently resolve customer service issues.
To help take advantage of the combined Google and Salesforce experience, qualified Salesforce customers can receive G Suite for up to one year at no additional cost—restrictions apply, and more details can be found on our site.
We hope this partnership enables more companies to take advantage of the cloud and that the combined solutions will provide an unmatched experience for customers. In fact, our team at Google Cloud uses Salesforce as our preferred CRM provider to engage with our customers in meaningful ways.
As for availability, several integrations between G Suite and Salesforce are already in market, including Lightning for Gmail and integrations with Calendar and Google Drive. The deeper integrations we’ve announced are expected to start rolling out in 2018.
With portrait mode on the Pixel 2 and Pixel 2 XL, you can take pictures of people, pets and even objects like flowers that keep what’s important sharp and in focus, but softly blur out the background. Portrait mode is powered by computational photography and machine learning, which identifies what to keep in focus and what to blur out. We’ve put together some tips to help you make the most of the new feature. Check it out—you’ll be a master portraitist in no time!
Get closer. This is the most important tip for getting great portraits. The less distance between you and your subject, the more likely your photos will have beautiful blur. Compare the photos on the right, below, with those on the left which are taken from farther away.
Increase distance between your subject and the background. The farther your subject is from the background, as on the image on the right below, the more the background will be blurred. In the left image, the background isn’t far enough away.
Tap that. For the best results, tap to focus the Pixel 2 on your subject, whether a person’s face or an object. Tapping also tells the Pixel 2 what’s most important to you in the photo and adjusts the exposure to prioritize your subject. This is especially useful when your subject has strong light (the sun or windows) behind them. Remember—you can always adjust the exposure by tapping on the screen and dragging your finger up or down.
Put the subject in the front. Seems obvious, right? But you’ll get more blur and beautiful bokeh if your main subject stands out, is prominent, and is clearly located in the foreground, like the image on the right below, not the middleground, as on the left.
Change your perspective. A unique angle adds visual interest to your photos, as in the examples below. Get low to match the eye level of a child or an animal, or shoot from above to emphasize shapes and graphic components in an object.
Remember the rule of thirds. Pixel 2 makes it easy to follow this classic photography recommendation. Tap the grid icon in the camera app to activate the 3×3 grid on your Pixel 2’s screen. Placing the subject along one of the lines or where the lines intersect can improve your composition.
Keep it simple. A photo’s success can be compromised if too many details compete for attention with your subject. You can avoid this fate by filling the frame and shooting in front of clean backgrounds. And don’t forget to check the outside edges of your photo before you press the shutter button to prevent clutter from protruding into the sides of your picture.
Get in line! This is one of our favorite tips. Look for lines, like bricks or tiles, staircases, or a building’s grid. Lines can enhance the sense of depth in photos and emphasize your subject.
Lighting is everything. You can’t always control lighting conditions, but you can find better light. Outdoors, find better light by changing your location or the camera’s orientation, repositioning subjects, or taking photos in the shade (especially at midday). Clouds are your friend, creating softer, more diffuse light. At night, try lighting your subject from the side with a friend’s phone in flashlight mode.
Ditch the crowds. Portrait mode works best when all your subjects are the same distance from the camera. This is easier with small groups of people.
We’d love to see how you put these tips to work. Share your Pixel 2 portraits on social media with #teampixel—we may feature them in one of our upcoming posts!
As humans, we rely on sound to guide us through our environment, help us communicate with others and connect us with what’s happening around us. Whether walking along a busy city street or attending a packed music concert, we’re able to hear hundreds of sounds coming from different directions. So when it comes to AR, VR, games and 360 video, you need rich sound to create an engaging immersive experience that makes you feel like you’re really there. Today, we’re releasing a new spatial audio software development kit (SDK) called Resonance Audio. It’s based on technology from Google’s VR Audio SDK, and it works at scale across mobile and desktop platforms.
Performance that scales on mobile and desktop
Bringing rich, dynamic audio environments into your VR, AR, gaming, or video experiences without affecting performance can be challenging. There are often few CPU resources allocated for audio, especially on mobile, which can limit the number of simultaneous high-fidelity 3D sound sources for complex environments. The SDK uses highly optimized digital signal processing algorithms based on higher order Ambisonics to spatialize hundreds of simultaneous 3D sound sources, without compromising audio quality, even on mobile. We’re also introducing a new feature in Unity for precomputing highly realistic reverb effects that accurately match the acoustic properties of the environment, reducing CPU usage significantly during playback.
Multi-platform support for developers and sound designers
We know how important it is that audio solutions integrate seamlessly with your preferred audio middleware and sound design tools. With Resonance Audio, we’ve released cross-platform SDKs for the most popular game engines, audio engines, and digital audio workstations (DAW) to streamline workflows, so you can focus on creating more immersive audio. The SDKs run on Android, iOS, Windows, MacOS and Linux platforms and provide integrations for Unity, Unreal Engine, FMOD, Wwise and DAWs. We also provide native APIs for C/C++, Java, Objective-C and the web. This multi-platform support enables developers to implement sound designs once, and easily deploy their project with consistent sounding results across the top mobile and desktop platforms. Sound designers can save time by using our new DAW plugin for accurately monitoring spatial audio that’s destined for YouTube videos or apps developed with Resonance Audio SDKs. Web developers get the open source Resonance Audio Web SDK that works in the top web browsers by using the Web Audio API.
Model complex sound environments
By providing powerful tools for accurately modeling complex sound environments, Resonance Audio goes beyond basic 3D spatialization. The SDK lets developers control the direction in which acoustic waves propagate from sound sources. For example, standing behind a guitar player can sound quieter than standing in front, and facing the guitar can sound louder than when your back is turned.
Another SDK feature is automatic rendering of near-field effects when sound sources get close to a listener’s head, providing an accurate perception of distance even when sources are close to the ear. The SDK also supports sound source spread: by specifying the width of a source, sound can be simulated as anything from a tiny point in space up to a wall of sound. We’ve also released an Ambisonic recording tool to spatially capture your sound design directly within Unity, save it to a file, and use it anywhere Ambisonic soundfield playback is supported, from game engines to YouTube videos.
If you’re interested in creating rich, immersive soundscapes using cutting-edge spatial audio technology, check out the Resonance Audio documentation on our developer site. You can also experience spatial audio in our Audio Factory VR app for Daydream and SteamVR. Let us know what you think through GitHub, and show us what you build with #ResonanceAudio on social media; we’ll be resharing our favorites.