Samsung Electronics, a world leader in advanced semiconductor technology, today announced its new narrowband (NB) Internet of Things (IoT) solution, Exynos i S111.
The new NB-IoT solution offers extremely wide coverage, low-power operation, accurate location feedback and strong security, optimized for today’s real-time tracking applications such as safety wearables or smart meters. The solution integrates a modem, processor, memory and Global Navigation Satellite System (GNSS) receiver into a single chip to enhance efficiency and flexibility for connected device manufacturers.
“IoT will be able to evolve to offer new features beyond the conventional household space with IoT-dedicated solutions that present a broad range of opportunities,” said Ben Hur, vice president of System LSI marketing at Samsung Electronics. “Exynos i S111’s highly secure and efficient communication capabilities will bring more exciting NB-IoT applications to life.”
As IoT grows to be a part of our everyday lives, some connected devices share useful information instantly in high volumes, but some transmit data in small nuggets over a long period of time. Popular radio connectivity systems such as Bluetooth and ZigBee are suitable for short-range scenarios within confined spaces such as in the home or a building, and broadband communications are commonly used for mobile devices that demand high data rates. On the other hand, NB-IoT supports applications that require reliable low-power communication and wide-range coverage for small-sized data.
To cover long distances with high reliability, the NB-IoT standard adopts a data retransmission mechanism that continuously retransmits data until a successful transfer, or up to a set number of retransmissions. With a high number of these retransmission sessions, the S111 is able to cover a distance of 10 kilometers (km) or more.
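The retransmit-until-success-or-budget-exhausted behavior described above can be sketched in a few lines of Python. The success probability and retry budget here are made-up illustrative numbers, not Exynos i S111 parameters:

```python
import random

def transmit(payload, p_success=0.3):
    """Stand-in for one over-the-air attempt; succeeds with
    probability p_success (a hypothetical link-quality figure)."""
    return random.random() < p_success

def send_with_retransmit(payload, max_retries=128):
    """Keep resending until the transfer succeeds or the retry
    budget is exhausted, mirroring the mechanism described above."""
    for attempt in range(1, max_retries + 1):
        if transmit(payload):
            return attempt   # number of attempts used on success
    return None              # gave up after max_retries

random.seed(0)
print(send_with_retransmit(b"sensor-reading"))
```

A larger retry budget trades airtime and energy for range, which is the trade-off that lets a device reach 10 km on a weak link.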
Exynos i S111 incorporates a modem supporting LTE Rel. 14 that can transmit data at 127 kilobits per second (kbps) downlink and 158kbps uplink, and can operate in standalone, in-band and guard-band deployments.
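At those peak rates, the small payloads typical of NB-IoT move quickly. A quick back-of-the-envelope calculation (idealized, ignoring the protocol overhead and scheduling gaps a real transfer would incur; the 200-byte payload is a hypothetical example):

```python
def transfer_time_s(payload_bytes, rate_kbps):
    """Idealized airtime for a payload at a given link rate,
    ignoring headers, retransmissions and scheduling gaps."""
    return payload_bytes * 8 / (rate_kbps * 1000)

# A hypothetical 200-byte sensor report at the 158 kbps uplink peak:
print(round(transfer_time_s(200, 158) * 1000, 1), "ms")  # about 10.1 ms
```

Even at these modest rates, a meter reading occupies the radio for only milliseconds, which is what makes the long sleep cycles described below practical.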
For long standby periods, the S111 utilizes power saving mode (PSM) and expanded discontinuous reception (eDRX), which keep the device dormant for long stretches, enabling battery life of 10 years or more depending on the application and use case. Exynos i S111 also has an integrated Global Navigation Satellite System (GNSS) receiver and supports Observed Time Difference of Arrival (OTDOA), a positioning technique that uses cellular towers, for highly accurate and seamless real-time tracking.
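Multi-year battery life comes from duty cycling: under PSM the device sleeps at microamp currents and wakes only briefly to transmit. A rough estimate shows how 10-year lifetimes become plausible; all of the figures below are illustrative assumptions, not S111 specifications:

```python
def battery_life_years(battery_mah, sleep_ua, active_ma, active_s_per_day):
    """Back-of-the-envelope lifetime for a duty-cycled device:
    average the sleep and active currents over a day, then divide
    the battery capacity by that average draw."""
    day_s = 24 * 3600
    avg_ma = (sleep_ua / 1000 * (day_s - active_s_per_day)
              + active_ma * active_s_per_day) / day_s
    return battery_mah / avg_ma / (24 * 365)

# e.g. a 2400 mAh cell, 3 uA sleep, 100 mA active for 10 s/day:
print(round(battery_life_years(2400, 3, 100, 10), 1), "years")
```

The estimate is dominated by the sleep current and how often the device wakes, which is exactly what PSM and eDRX timers control.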
Transmitted data are kept secure and private with the S111, as the solution utilizes a separate Security Sub-System (SSS) hardware block along with a Physical Unclonable Function (PUF) that creates a unique identity for each chipset.
Following the successful launch of the company’s first IoT solution, Exynos i T200, in 2017, Samsung plans to continue expanding the ‘Exynos i’ lineup with offerings especially tailored for narrowband networks.
For more information about Samsung’s Exynos products, please visit http://www.samsung.com/exynos.
It’s a common mistake in enterprises to copy-paste security solutions from a peer. Strategies can be recycled, but even very similar businesses almost always have radically different IT and security requirements. I recall one hospital that looked at a nearly identical peer hospital only a few miles away. Much of the technology was similar, even down to the IT products (SAP, O365). Digging in, they realized that differences in custom apps, how they patched, how contractors serviced them, and the capabilities of their security staff meant they’d need their own security architecture. If even similar organizations in the same vertical have particular security needs, it only emphasizes that small and medium businesses (SMBs), enterprises, and carriers usually have unique needs. Some of these differences are easy to find: some enterprises can dictate that an endpoint protection platform (EPP) be deployed on connecting devices, whereas a carrier can’t dictate that to customers, and an SMB with 200 endpoints has different management console requirements than a carrier monitoring 50,000 through intermediaries such as partners and affiliates. Carriers also have a much higher sensitivity to false positives while having to accommodate incredible degrees of heterogeneity: stepping on a customer’s legitimate interaction is rated much differently for a carrier than in most enterprises or SMBs.
Carrier security is changing in response to very different forces. 5G promises to be a technology as significant to carriers as cloud has been to enterprises. Like enterprises, carriers and telcos face IoT vulnerabilities and issues of scale, but carrier scale is often an order of magnitude greater, and the impact of a vulnerability that opens a backdoor from the internal carrier network is not just a security risk but one of brand reputation and, potentially, safety.
Network function virtualization (NFV) has been a big deal in cloud and data center security, and it matters for carriers too, but in a different way. With the first iterations of virtualization and cloud, enterprises weren’t able to address the virtual network directly or implement their own security as add-on services; they were usually left with the choice of using only what the CSP or hypervisor vendor provided, or constructing inefficient hair-pinned flows to workloads to simulate network-security delivery. CSPs and hypervisor vendors reacted by allowing enterprise customers and third parties access to virtual network functions and switches in order to provide network access and virtualized security. Carriers went through a parallel evolution. Carriers had access to NFV, but their scale meant that enterprise NFV security solutions weren’t usable, and most weren’t even close to securing the types of equipment that carriers deployed, nor the way that equipment was secured. Just as cloud and hypervisor NFV security options opened up, carrier-grade NFV security is evolving as well, jumpstarted by 5G and IoT. And how NFV security is provided to customers is unique to carriers, with types of security delivered as value-added services that require mechanisms for provisioning, updating and monitoring, often by partners.
The bottom line is that carrier security is increasingly becoming specialized to the carrier market, and carrier-grade security means using carrier-grade security solutions rather than repurposed security from other verticals and horizontals.
You can read more about this week’s announcement for carrier grade security and Trend Micro here.
Take a look around your house, office or even the next store you visit, and you’ll start to notice that internet-connected devices are bringing us closer than ever before to a world of ubiquitous computing and ambient intelligence. As these Internet of Things (IoT) devices become increasingly commonplace, people will start to expect computing to be more integrated into their lives, to anticipate, understand and seamlessly meet their needs. They will expect software to respond to spoken natural language, gestures, body language and emotion, and for it to understand the physical world and the rich context surrounding each user as they navigate their personal life, their work and the world around them.
This trend has more promise than just bringing additional convenience, productivity and connections to our everyday lives. Smart sensors and devices are breathing new life into industrial equipment from factories to farms, helping us navigate and plan for more sustainable urban cities and bringing the power of the cloud to some of the world’s most remote destinations. With the power of artificial intelligence (AI) enabling these devices to intelligently respond to the world they are sensing, we will see new breakthroughs in critical areas that benefit humanity like healthcare, conservation, sustainability, accessibility, disaster recovery and more.
We call this next wave of computing the intelligent edge and intelligent cloud. When we take the power of the cloud down to the device – the edge – we provide the ability to respond, reason and act in real time and in areas with limited or no connectivity. As Satya shared at our Build developer conference, it’s still early days, but we’re starting to see how these new capabilities can be applied toward solving critical world challenges.
We need to give all organizations and developers the tools to build these kinds of increasingly ambitious solutions that span the intelligent edge and intelligent cloud. Moreover, these tools must give developers strong security foundations and help them to place security at the very core of their solutions. Devices on the edge handle some of our most sensitive business and personal data in our homes, workplaces, and sometimes in physically remote places.
To protect data wherever it lives, security needs to be baked in from the silicon to the cloud. This has been one of the central design principles of Microsoft’s intelligent edge products and services. Azure Sphere is our intelligent edge solution to power and protect connected microcontroller unit (MCU)-powered devices. There are 9 billion of these MCU-powered devices shipping every year, which power everything from household stoves and refrigerators to industrial equipment. With more processing power than traditional MCUs and a holistic security approach, we believe Azure Sphere will make our increasingly connected world safer. In addition, Azure IoT Edge enables you to run cloud intelligence directly on IoT devices and includes security from device provisioning and management to hardware and cloud services that run on top of the devices. Azure Stack, just one of our many tools to power hybrid scenarios, offers customers the flexibility to securely deploy in the cloud, on-premises or at the intelligent edge.
In the past three months, we introduced Azure Sphere at RSA; announced a powerful application developer experience with Visual Studio for Azure Sphere to accelerate innovation at the outer edge, as well as new IoT edge capabilities and partnerships at Build; and shipped Azure IoT Edge general availability last month. This is all part of our commitment to intelligent edge innovation and our broader $5 billion investment in IoT to empower our customers and partners. We have more exciting updates around the corner and look forward to seeing what our customers and partners build.
The post The next wave of computing is the intelligent edge and intelligent cloud appeared first on The Official Microsoft Blog.
The question of how AI technologies understand human dialog and queries to suggest an optimum answer is one of the hot topics in the AI industry. Jihie Kim, Head of the Language Understanding Lab at Samsung Research AI Center, is also striving to develop the technology behind an AI algorithm that can talk with people naturally and propose solutions to a problem.
The Language Understanding Lab led by Dr. Kim recently grabbed global attention after taking top ranks at global machine reading comprehension competitions held by Microsoft and the University of Washington, respectively. Samsung Newsroom visited the Samsung Research AI Center in Seocho-gu, Korea, to interview Dr. Kim about the lab’s performance in the machine reading comprehension competitions and his plans for the future evolution of AI algorithms.
Q. Please tell us about the MS MARCO and TriviaQA competitions held by Microsoft and the University of Washington, respectively, where your team ranked first place.
Kim: There have been many global machine reading competitions recently where AI presents solutions to a problem. MS MARCO and TriviaQA are among the top five global competitions in machine reading comprehension. AI algorithms are tested on whether they can understand and analyze questions to offer answers. Those tests are designed by referring to internet users’ queries and search results.
Q. What do you think was the critical factor in excelling at the AI competitions which require such high levels of technical expertise?
Kim: The ConZNet algorithm developed by the Language Understanding Lab at Samsung Research is upgrading its intelligence by accounting for real user environments. The algorithm takes natural language into account, such as how people deliver queries and answers online. We were able to win those competitions because MS MARCO and TriviaQA test AI capabilities in real user environments. In truth, our algorithm was a bit behind other competitors in tests requiring a simple answer to a question after analyzing a short paragraph. But because such tests have low relevance to the real environments where AI technologies are used, we are focusing our continuous R&D on tests such as MS MARCO.
Q. Do you apply the winning algorithms to customer services in real life?
Kim: An Open Lab event was held recently to introduce the labs at Samsung Research to other departments in Samsung Electronics. At the event, we had in-depth discussions with engineers in our home appliances and smartphone departments about AI algorithms. Departments dealing with customer services also showed high interest in what we do because AI-based customer services including chatbots are emerging as a hot topic. We hope that our technologies developed at Samsung Research will be naturally adopted to Samsung Electronics products and services.
Q. What is your future evolution plan for advancing AI technologies in language understanding?
Kim: ConZNet is an acronym for “Context Zoom-in Network.” The name implies that understanding the context of what people say is critical. We need to advance AI technologies to help them understand and analyze short sentences. AI algorithms also need the capability to analyze real-time news reports, rather than only existing data, to answer customer queries. We are also developing technology that lets an AI algorithm answer, “There are no proper answers to your query,” in addition to searching for the right answers. This so-called “rejection problem” presents a high level of technical difficulty.
Q. Please tell us your ultimate goal in developing AI technologies.
Kim: The strengths of Samsung in the AI industry are that we can build a knowledge system about connections between machines and applications, and customer demands in the internet of things (IoT) environment comprised of personal devices, based on Samsung Electronics’ diverse product lineup. This will help us to achieve the goal of realizing a user-oriented AI system by collaborating with global partners in the industry. Samsung Electronics recently began to launch global AI Centers and we will lead the effort of working with AI experts at the new centers abroad.
Samsung Electronics and Mobile TeleSystems (MTS), Russia’s largest telecommunications operator and digital services provider, announced that they used Samsung’s 5G network and devices to successfully demonstrate a series of 5G scenarios including HD video calls, ultra-low latency video games and high-definition video streaming.
The demonstration zone was set up in the exhibition hall of the Popov Central Museum of Radio Communications, one of the world’s oldest museums of science and technology, in St. Petersburg. Guests and journalists were given the chance to make HD video calls and play a football simulation video game on Samsung’s 5G prototype tablets. 5G provides an unprecedented mobile video game user experience by dramatically reducing the response time between devices.
The demonstrations also included streaming of 4K ultra-high definition video. These trials utilized Samsung’s 5G end-to-end solutions including 5G routers (CPE, Customer Premise Equipment) and prototype tablets, 5G radio access unit, virtualized RAN and virtualized core network.
“Our goal is to adapt new technologies to commercial use in cooperation with industry-leading vendors. Today’s trial with Samsung Electronics demonstrates that 5G is not an academic theory, but presents a nearly ready set of practical network solutions that will allow customers to manage a broad range of everyday tasks and open new opportunities that are unachievable on 4G,” commented Pavel Korotin, Director of St. Petersburg and Leningrad Region, MTS.
“Today’s tests demonstrated the readiness of MTS’s network infrastructure for the deployment of 5G. Samsung is pleased to have MTS as our partner to together explore the capabilities of 5G that are essential to unlocking near-term use cases for both customers and enterprises,” said Seungsik Choi, Vice President of Samsung Electronics Russia.
Samsung and MTS began their cooperation in 2014 with the launch of LTE networks in the Northwestern Federal District of Russia, including St. Petersburg and the Leningrad region. In the second half of 2017, Samsung and MTS signed an agreement to expand and upgrade MTS’ network with LTE-Advanced Pro and IoT features. The companies will also continue their collaboration on 5G.
About Mobile TeleSystems PJSC
Mobile TeleSystems PJSC (“MTS” – NYSE:MBT; MOEX:MTSS), the leading telecommunications group in Russia and the CIS, provides a range of mobile, fixed-line and digital services. We serve over 100 million mobile subscribers in Russia, Ukraine, Armenia, and Belarus, and about 9 million customers of fixed-line services, including fixed voice, broadband internet, and pay-TV. To keep pace with evolving customer demand, MTS is redefining what telecommunications services are by offering innovative products beyond its core network-related businesses in various tech segments, including Big Data, financial and banking services, internet of things, OTT, cloud computing, systems integration and e-commerce.
We leverage our market-leading retail network as a platform for customer services and sales of devices and accessories. MTS maintains its leadership in the Russian mobile market in terms of revenue and profitability. MTS is majority-owned by Sistema PJSFC. Since 2000, MTS shares have been listed on the New York Stock Exchange and since 2003 – on the Moscow Exchange. For more information, please visit: www.mtsgsm.com.
At the 2018 Electronic Entertainment Expo (‘E3’) in LA this June, Samsung QLED TVs were exhibited as part of Microsoft’s Xbox booth. The 75-inch QLED TV screens caught the attention of keen gamers, with a long queue forming in order to enter the showroom. Players enjoyed an unparalleled 4K HDR gaming experience, featuring stunning graphics and quick response times, with the QLED.
Samsung’s 2018 QLED TVs have evolved to merit the title of the ‘monster’ of the gaming industry. They offer gamers a low input lag of no more than 15ms (0.015 seconds), a rich viewing experience achieved by 100% color volume and High Dynamic Range (HDR) 2000, and Radeon FreeSync for the seamless playing of even the most demanding games.
Input lag can be a critical issue for those players who want to enjoy fast-paced games like first-person shooters (FPS) or fighting games. Input lag, or input latency, is the delay between the TV or monitor receiving a signal and it being displayed on the screen, or the delay between pressing a controller button and seeing the game react. It is important in gaming because even a 0.01 second reaction difference can determine the difference between a winning and losing move in an FPS game.
Samsung’s QLED TVs have input lag reduced to 15ms (1ms = 0.001 seconds), faster than conventional TVs which have a 20-30ms lag on average. Gamers can fully immerse themselves in games on their QLED TVs with minimal lag. QLED TV’s Auto Game Mode also changes features including response speed for game optimization. When users start a game on a console such as Xbox, the TV automatically recognizes the game and sets the correct game-related settings.
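The figures above can be put into gaming terms by converting input lag into rendered frames. At 60 frames per second, each frame lasts about 16.7 ms, so 15 ms of lag costs less than one frame while 30 ms costs nearly two. A quick illustrative calculation (not a Samsung specification):

```python
def lag_in_frames(lag_ms, fps=60):
    """Number of rendered frames a given input lag spans
    at a target frame rate."""
    return lag_ms * fps / 1000

print(lag_in_frames(15))  # 15 ms at 60 fps -> 0.9 frames
print(lag_in_frames(30))  # a conventional TV at 60 fps -> 1.8 frames
```

In a fighting game or FPS, that difference of roughly one full frame is the window in which a block or shot can land or miss.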
Today’s games are characterized by feature film-quality storylines, cinematography and graphics. Game characters and avatars are becoming more and more lifelike, as are the landscapes around them, often offered in full 360-degree rotation to the player for the most immersive experience possible.
In order to deliver this virtual realism to gamers, game platforms feature the latest in color, contrast and imaging technology, as well as 144Hz refresh rates for high frame-rate gaming. Samsung’s 2018 QLED TVs are the first in the TV industry to use Radeon FreeSync in collaboration with global graphics card maker AMD. Radeon FreeSync puts an end to choppy gameplay and broken frames, bringing smooth gaming and the fluidity of gaming monitors to the big screen.
With the Samsung QLED TV, gamers can experience the cutting-edge display performance they are used to. The QLED TVs feature HDR10+ for crisp and detailed images, as well as 100% color volume for greater image depth in both the day and nighttime, at any brightness level. Samsung’s Anti-reflection technology is another bonus for reducing eye strain and optimizing gameplay.
As well as their favorite console games, users can also play Steam games on their QLED TV. Steam is an online game platform with over 120 million users globally.
In order for QLED users to enjoy the full offering of the Steam platform, all they need to do is head to the Samsung Smart Hub and download the free Steam Link app. No supplementary hardware is necessary for users to be able to access the 15,000+ games offered by Steam on the large QLED screen.
In addition, gamers don’t need to bother with changing the settings on their QLED TV in order to access their favorite console games. Just turn on Auto Game Mode and start your console.
TV screens suffer from performance decline over time, with static images and fixed interfaces leading to image retention and consequently the burn-in issue. With Samsung QLED TVs, however, you no longer need to worry about burn-in.
Rtings, an industry-trusted US TV Reviewer, ran a test for permanent image retention. Different kinds of TVs were kept on for 10 minutes, running a test pattern in a loop. The 2017 QLED TV scored 10/10 for image quality and lack of burn-in, while some screens achieved a mere 5/10.
The durability of the QLED TV screen is due to groundbreaking Quantum Dot technology. Quantum dots are made of inorganic materials, which translates to longer-lasting displays that do not develop the burn-in issue. Displaying the same image or scene for long periods of time will not leave unpleasant ghost images on your QLED TV, enhancing the gaming experience.
“Today’s TVs are like all-round players. More and more, people are enjoying everything on TV such as games, the internet, or smart functions provided by IoT,” said Jongsuk Chu, Senior Vice President of Visual Display Business at Samsung Electronics. “The huge-screen QLED TV with best performance and durability will take the gaming experience to the next level.”
Last year, more than 7 out of 10 TVs sold by Samsung Electronics in the global TV market were ‘smart TVs.’ According to market researcher IHS Markit, smart TVs’ share of global TV sales has exceeded 60 percent, a significant leap from 35 percent in 2013, demonstrating the growing demand for easy and convenient smart functions.
Samsung Electronics is leading the change by strengthening the intelligent function of its 2018 QLED TVs with artificial intelligence (AI) and the Internet of Things (IoT). This year’s QLED TVs have dramatically simplified the consumer process for utilizing their TV, from the first set-up to connection with other devices. What’s more, the TVs are now able to share content with, and control, connected devices. These are the result of Samsung’s efforts to innovate the user experience based on the company’s understanding of consumer usage and by incorporating the intelligent assistant Bixby and the SmartThings IoT platform.
The first step, of course, is to get the system set up, which takes just a few simple steps. When setting up for the first time, the TV will automatically connect to a user’s Samsung smartphone, or any mobile device with the SmartThings app installed, and guide the user through set-up. The network information on the user’s smartphone will be shared, instantly connecting the TV to the same Wi-Fi network. If the user’s mobile is already linked to a Samsung Account, that information will also be shared with the TV.
Users can also select their apps of choice from their smartphones to enjoy on the Smart TV, and their favorites can be added to the launcher for easy access. Once set-up is complete, users won’t have to worry about it again. Unlike in the past, when users had to sign in to each app on the TV, the Effortless Login feature, arriving in the second half of this year, will share and store the login information from a user’s mobile device. The QLED TV allows users to instantly access their favorite social networking or music streaming apps.
According to a survey by business solution provider CSG, 9 out of 10 consumers already own IoT devices, and half of consumers expect IoT to make tasks easier around the home. But, the process of connecting the TV to the peripherals tends to be a hassle for many. The 2018 QLED TV has tackled such challenges once and for all with ‘SmartThings,’ an integrated IoT platform that’s available now.
Users can run the SmartThings app on their smartphone and click ‘Add Device’ to automatically find all the IoT devices in the house that they can connect to. When turned on, SmartThings detects IoT devices at home, ranging from traditional home appliances like washing machines and air conditioners to baby cams, lights, doorbells, and locks. Device information in these connected homes can be viewed at a glance on the TV using the SmartThings Dashboard. The TV immediately becomes the control tower of the living room. Users can, for example, operate the air conditioner after checking the home temperature, or check the remaining laundry time on the washing machine. Users can also choose to charge or run the PowerBot robot cleaner, and take a look at what ingredients are in the Family Hub refrigerator.
The TV has made it easier than ever to review the pictures and videos taken on a smartphone through the Gallery app. Photos and video clips can be uploaded to Samsung Cloud, which is connected to the user’s Samsung Account, ready to be viewed on the big screen. When you take a picture on your mobile while traveling, you can share your memories with your family as soon as a notification appears on the TV. The Gallery app also enables users to share the same content on the Family Hub.
The QLED TV will play the role of a special mediator in the living room to help family members communicate easily on a deeper level.
The QLED TV is equipped with a ‘One Remote Control’ function that can control various peripherals with a single remote. In particular, this year it supports not only game consoles, OTT (over-the-top) boxes and speakers, but also audio products connected with optical cables.
Bixby Voice also makes navigating the QLED TV easier. By pressing and holding the remote control’s voice recognition button, users can search for content or set up the TV without repeating multiple steps. Users can find VOD (Video on Demand) content from services such as Amazon, Hulu and HBO, as well as content from the set-top box. Users can also get additional information, like the soundtrack, while watching TV. For example, say “What is this song” to find out what music is playing in the background.
Devices connected through SmartThings can also be voice-controlled. With Bixby, users can have a home IoT experience where everything flows smoothly.
“This year’s smart features have made it easier for users to enjoy the QLED TV experience more conveniently,” said Heeman Lee, Vice President of Samsung Electronics’ Visual Display Business. “We will realize home IoT that changes lifestyles through various connected devices communicating organically.”
* The SmartThings app is supported on Android 7.0 and above and iOS 10 and above (On Galaxy smartphones, the SmartThings app is available from Android 6.0 and above)
**Specific features and availability may vary by region and market
Samsung Electronics today announced the opening of the artificial intelligence (AI) Center in Russia, which will be located in the “White Square” business center in Moscow. The Center will help the company strengthen its leadership in the field of AI and explore the broad capabilities of user-oriented AI.
The new Center’s main research areas will be computer vision and basic algorithms for AI platforms. The Center will also expand AI research in key areas such as robotics and intelligent driving assistance, as well as for future Samsung projects.
The Samsung AI Center in Russia will be led by Professor Dmitry Vetrov of the Higher School of Economics (HSE). With a Ph.D. degree in Physical and Mathematical Sciences, Vetrov is also the Head of Samsung’s laboratory at the Center for Deep Learning, and Bayesian Methods Research Group at the HSE.
As the leader of the Center, Professor Vetrov will combine scientific work and administrative activities: interacting with Samsung’s divisions and third-party institutions, organizing the Center’s overall work, managing work groups, as well as overseeing and participating in scientific research. Professor Viktor Lempitsky of the Skolkovo Institute of Science and Technology will also join the team as the leader of the research group.
“Samsung has always been the first to introduce new products and solutions that change the way people interact. Considering Russia as one of the world’s biggest hubs in the field of technical sciences, it is only natural that we chose the country as one of the sites for our new AI Center,” said Ultack Kim, President of Samsung Electronics Headquarters in CIS countries. “A team of the world’s best scientists and IT specialists at the AI Center will help Samsung bring its robotics and AI technologies to a whole new level.”
While there are currently several joint AI laboratories at Moscow State University, the Higher School of Economics and the PDMI RAS, the new Center will establish additional joint labs with Russia’s leading universities. In addition, Samsung will conduct projects with local universities in different regions of Russia, including Kazan, Samara, Rostov-on-Don, Tomsk, and Novosibirsk. Cooperation with Russian start-ups to solve practical problems is also under discussion. Moving forward, these efforts can be developed into full-fledged AI and machine learning services, as well as promising developments in applications and components for the company’s products.
“Samsung plans to introduce AI technologies to all of its connected devices and services by 2020. AI will enhance our customer value by offering information and services under any circumstances,” commented Jin Wook Lee, the Head of Samsung R&D Institute Russia, at the opening of the new Center. “This will be a huge step for the world of technology, and will help simplify the implementation of everyday tasks,” he added.
As the popularity of Internet of Things (IoT) devices grows, the area of application of AI-based solutions is expanding rapidly. According to the forecast of Samsung experts, IoT devices with built-in AI will generate enormous amounts of data in the coming years. By processing this information, the devices will be able to provide maximum personalization and full compliance with users’ needs.
“Currently, AI is one of the most promising branches of technology. The opening of the Samsung AI Center in Russia will allow us to contribute to the development of the industry and to apply the achievements of the Russian mathematical school, which has a high level of practical training of research specialists,” said Professor Vetrov.
Earlier in May, Samsung opened two new AI Centers in Cambridge (UK) and Toronto (Canada). More information about the Samsung AI Center in Russia and the global AI Centers is available on the Samsung Research website.
The world is a computer, filled with an incredible amount of data. By 2020, the average person will generate 1.5GB of data a day, a smart home 50GB and a smart city, a whopping 250 petabytes of data per day. This data presents an enormous opportunity for developers — giving them a seat of power, while also giving them tremendous responsibility. That’s why at Build this morning we are focused on equipping these developers with the tools and guidance to change the world. On stage in Seattle, Microsoft CEO Satya Nadella is describing this new world view, fueled by AI that can power better health care, relieve challenges around basic human needs and create a society that’s more inclusive and accessible.
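For a rough sense of scale behind those figures, assuming decimal units (1 GB = 10^9 bytes and 1 PB = 10^15 bytes; the post does not define its units), the quoted smart-city volume works out to the daily output of five million smart homes:

```python
# Back-of-the-envelope comparison of the daily data volumes quoted above.
# Assumes decimal units: 1 GB = 1e9 bytes, 1 PB = 1e15 bytes.
GB = 10**9
PB = 10**15

person_daily = 1.5 * GB       # average person, per day
smart_home_daily = 50 * GB    # smart home, per day
smart_city_daily = 250 * PB   # smart city, per day

homes_per_city = smart_city_daily / smart_home_daily
people_per_city = smart_city_daily / person_daily

print(f"One smart city ~ {homes_per_city:,.0f} smart homes of daily data")
print(f"One smart city ~ {people_per_city:,.0f} average people of daily data")
```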
Helping create a better, safer, more just world is a responsibility we take seriously at Microsoft. We’ve always been committed to the ethical creation and use of technology. As AI increasingly becomes part of our lives, Microsoft’s commitment to advancing human good has never been stronger. Today, we’re announcing AI for Accessibility, a new $25 million, five-year program aimed at harnessing the power of AI to amplify human capability for the more than one billion people around the world with disabilities. AI for Accessibility is a call to action for developers, NGOs, academics, researchers and inventors to accelerate their work for people with disabilities, focusing on three areas: employment, human connection and modern life. It includes grants, technology and AI expertise to accelerate the development of accessible and intelligent AI solutions, and builds on recent advancements in Azure Cognitive Services to help developers create intelligent apps that can empower people with hearing, vision and other disabilities. Real-time speech-to-text transcription, visual recognition services and predictive text functionality that suggests words as people type are just a few examples. We’ve seen this impact through the launch of Seeing AI and alt-text, which empower people who are blind or have low vision, as well as Helpicto, which helps people with autism.
If AI is the heart of how we can advance society, the intelligent cloud and the intelligent edge are the backbone. In the next 10 years, billions of everyday devices will be connected — smart devices that can see, listen, reason, predict and more, without a 24/7 dependence on the cloud. This is the intelligent edge, and it is the interface between the computer and the real world. The edge takes AI and cloud together to collect and make sense of new information, especially in scenarios that are too dangerous for humans or require new approaches to solve, whether they be on the factory floor or in the operating room.
Today we’re giving developers the tools and guidance to build these possibilities. For example, we’re making it easier to build apps at the edge by open sourcing the Azure IoT Edge Runtime, allowing customers to modify the runtime and customize applications at the edge. We’re giving developers Custom Vision — the first Azure Cognitive Service available for the edge — to build applications that use powerful AI algorithms that interpret, listen, speak and see for edge devices. And we are partnering with both DJI and Qualcomm. Microsoft and DJI, the world’s largest drone company, will collaborate to develop commercial drone solutions so that developers in key vertical segments such as agriculture, construction and public safety can build life-changing solutions, like applications that can help farmers produce more crops. With Qualcomm Technologies Inc., we announced a joint effort to create a vision AI dev kit running Azure IoT Edge, for camera-based IoT solutions. The camera can power advanced Azure services like machine learning and cognitive services that can be downloaded from Azure and run locally on the edge. Other advancements include a preview of Project Brainwave, an architecture for deep neural net processing, that is now available on Azure and on the edge. Project Brainwave makes Azure the fastest cloud to run real-time AI today.
We are also releasing new Azure Cognitive Services updates such as a unified Speech service that makes it easier for developers to add speech recognition, text-to-speech, customized voice models and translation to their applications. In addition, we’re making Azure the best place to develop conversational AI experiences integrated with any agent. New updates to Bot Framework, combined with our new Cognitive Services updates, will power the next generation of conversational bots, enabling richer dialogs and full personality and voice customization to match a company’s brand identity.
It was eight years ago when we shipped Kinect, which was the first AI device with speech, gaze and vision. We then took that technology forward with Microsoft HoloLens. We’ve seen developers build transformative solutions across a multitude of industries, from security to manufacturing to health care and more. As sensor technology has evolved, we see incredible possibilities for combining these sensors with the power of Azure AI services such as machine learning, Cognitive Services and IoT Edge.
Today we are excited to announce a new initiative, Project Kinect for Azure — a package of sensors from Microsoft that contains our unmatched time-of-flight depth camera, with onboard compute, in a small, power-efficient form factor — designed for AI on the edge. Project Kinect for Azure brings together this leading hardware technology with Azure AI to empower developers with new scenarios for working with ambient intelligence.
Similarly, our Speech Devices software development kit announced today delivers superior audio processing from multi-channel sources for more accurate speech recognition, including noise cancellation, far-field voice and more. With this SDK, developers can build for a variety of voice-enabled scenarios like drive-thru ordering systems, in-car or in-home assistants, smart speakers and other digital assistants.
This new age of technology is also fueled by mixed reality, which is opening up new possibilities in the workplace. Today we announced two new apps that will help empower firstline workers, the first workers to interface with customers and triage problems: Microsoft Remote Assist and Microsoft Layout. Microsoft Remote Assist enables remote collaboration via hands-free video calling, letting firstline workers share what they see with any expert on Microsoft Teams, while staying hands on to solve problems and complete tasks together. In a similar vein, Microsoft Layout lets workers design spaces in context with mixed reality, using 3D models for creating room layouts with holograms.
Whether creating a more inclusive and accessible world, solving problems that plague humanity or helping improve the way we work and live, developers are playing a leading role. As new ideas and solutions with AI and intelligent edge emerge, Microsoft will continue to advocate for developers and give them the tools and cloud services that make it possible to build these new solutions to solve real problems. From the top down, we are a developer-led company that continues to invest in coders and give them free rein to solve problems.
Learn more about how we’re empowering developers to build for this future today using Azure and M365, via blog posts from Executive Vice President of Cloud + AI Scott Guthrie and Corporate Vice President of Windows Joe Belfiore.
The post Advancing the future of society with AI and the intelligent edge appeared first on The Official Microsoft Blog.
Innovations showcasing Industrial IoT, AI, cobotics, digital twins and mixed reality will be on display at Microsoft’s booth at the annual Hannover Messe industrial fair.
This week at the world’s largest industrial fair, I am honored to once again host nearly 30 customers and partners in Microsoft’s booth at the Digital Factory Hall at Hannover Messe. The progress manufacturers have made this past year is tremendous. Smart factories have already seen an average of 17 to 20 percent increased overall productivity. They have created higher-quality products at lower costs. They are building entirely new business models and service offerings. But our customers’ aspirations are bigger and bolder. For example, by the year 2050, the demand for food is expected to outpace production by more than 70 percent. Agricultural stability is being threatened by receding levels of fresh water, decreasing availability of arable land and global warming, causing issues like toxins in our food supply. The workforce will continue to modernize and shift.
The next step is to use our Internet of Things (IoT)-enabled levels of intelligence to optimize the entire manufacturing process and solve for these challenges. There are three distinct themes that stand out at this year’s event:
Increased productivity and safety
What we have built with customers is driving tangible results. Today’s new data-driven manufacturing capabilities are not only lowering costs and reducing waste, but they are also keeping people safer and mitigating our impact on the planet. For example, Swiss technology firm Bühler AG, a leader in food processing systems, has worked closely with Microsoft to develop LumoVision, a revolutionary optical sorting system that not only significantly improves current food cleaning practices, but can eliminate nearly 90 percent of contaminated grain compared to 50 percent for conventional sorting machines. Empowered by the Microsoft cloud and IoT technology, this solution builds on Bühler’s advanced process expertise, and as a result, LumoVision is faster and more precise than other grain-sorting technologies.
We will also demonstrate how Microsoft HoloLens has become an invaluable tool in taking digital twin technology to the next level. Thanks to the explosive expansion of Industrial IoT, digital twins have become cost-effective to implement and are helping companies head off problems before they even occur. Our customers are using digital twins to prevent downtime, improve equipment performance, develop new service opportunities and even plan for the future by generating simulations and visualizing their processes in mixed reality. For instance, Schneider Electric, whose industrial software business has recently combined with AVEVA, is leading the evolution of what the industry refers to as a “process digital twin.” Schneider and AVEVA are leveraging HoloLens to optimize ItalPresse’s entire manufacturing process by creating virtual prototypes even before a plant or manufacturing asset is built, which can provide significant cost and efficiency savings. Schneider will also showcase its recently announced traceability tool for the food and beverage industry, combining its knowledge of the food and beverage industry with Microsoft’s expertise in blockchain, given the growing complexities with tracing food products.
Additionally, one of our leading robotics automation partners, ICONICS, will demonstrate how a technician wearing a HoloLens can work alongside a factory robot while receiving instructions and key factory performance indicators displayed over their field of vision.
Microsoft’s customers and partners are creating new value chains and services that simply did not exist five years ago. Tech innovations have allowed them to establish digital SWOT (strengths, weaknesses, opportunities and threats) teams, open new “one-size-fits-one” plants, and monetize things like predictive maintenance, 3D modeling and smart operations. For example, just last week thyssenkrupp announced it is expanding MAX, the company’s IoT-based predictive service solution for elevators, to Latin America. thyssenkrupp is confident that MAX can reduce elevator downtime by up to 50 percent, making its predictive models second to none in the global elevator industry.
ABB, a global leader in industrial technology, is leveraging Microsoft’s Azure cloud technologies for its ABB Ability™ platform, one of the largest Industrial IoT platforms in the industry. ABB will showcase its ABB Ability™ Ellipse platform. Leveraging AI from Microsoft, it can empower organizations to optimize Enterprise Asset Management and automatically detect anomalies to minimize maintenance costs across its customers’ installed base.
Bayer’s Environmental Science Business Unit is digitally transforming a decades-old pest control practice for trapping rodents with a smarter digital mousetrap that provides remote monitoring built on top of the Azure IoT platform. The solution collects information from sensors installed within each trap and immediately alerts pest management professionals when rodents are present, so they can head off infestations and increase the effectiveness of pest control programs.
By 2020, IDC predicts that 60 percent of plant floor workers will work alongside assistance technologies that enable automation, such as robotics, 3D printing, AI and mixed reality. Several leading manufacturing and robotics companies have already created new and evolved “lean” processes that leverage these capabilities to help service technicians optimize tasks and lower waste and inefficiencies, while providing better customer service. For example, Toyota Material Handling Europe is planning its 10-year vision for the factory of the future by evolving its traditional lean processes. Its goal is to find more efficient ways to distribute intelligent logic across the factory and its robotic systems. Using AI capabilities like Microsoft AirSim and mixed reality, the company can train autonomous pallet drones to recognize patterns, automate processes and learn the flow on the plant floor safely alongside humans. This innovative solution would drastically reduce disruptions to warehouse operations, one of the key roadblocks to deploying autonomous systems. Toyota Material Handling Europe has also worked with Microsoft to develop T-Stream, a brand new, all-in-one solution. Built on Microsoft’s Azure cloud, it runs on Windows and utilizes Bing Maps and GPS systems to provide technicians with improved, proactive services that can carry out maintenance for customers before breakdowns occur.
Investing in Industrial IoT
As we partner with our manufacturing customers to empower their digital transformations, we continue to invest specifically in security and Industrial IoT innovations that meet their needs, where they need them most. This year we have a new set of robust Azure IoT features we will demonstrate at our booth, including:
Why customers and partners bet on Microsoft
Our cloud platform is now available in more than 42 regions across the globe and meets a broad set of international standards and compliance requirements, including European Union General Data Protection Regulation (GDPR) and intellectual property (IP) requirements, which are critical as the May 25 GDPR deadline looms. Additionally, we have one of the largest, if not the largest, partner ecosystems co-selling solutions with us at Hannover Messe. Partners are not only an important part of a complex Industrial IoT ecosystem, they are critical to how we do business. Today, Siemens announced its IoT ecosystem MindSphere is now available on the Microsoft Azure cloud platform, giving our joint customers the ability to make their IoT applications available on our cloud. The preview of MindSphere for Microsoft Azure is available for select customers and partners, and will follow a continuous development and deployment model. MindSphere for Microsoft Azure is planned to be generally available in the fourth calendar quarter of 2018. More than 90 percent of our revenues come through our 8,500 trusted partners across the globe. Every major Industrial IoT provider, including ABB, Accenture/Avanade, COPA-DATA, EY, GE, ICONICS, Kapsch, OSIsoft, PTC, Rockwell Automation and Schneider Electric, has joined forces with Microsoft to integrate and offer their manufacturing services and solutions on top of our global Azure cloud.
There are many more reasons the industry is choosing to partner with Microsoft, but I invite you to come see the innovations first-hand in our booth in the Digital Factory: Hall 7, Stand C40 this week in Hannover.
Visit the Digital Difference site for more information about Microsoft and its customers’ and partners’ presence at Hannover Messe 2018.
The post Industrial customers and partners bet on Microsoft, from cloud to edge devices appeared first on The Official Microsoft Blog.
We are living in a world where almost everything is becoming connected, whether it’s the electrical grid, phone system, our cars, or the appliances that heat our home or chill our food. As this Internet of Things (IoT) continues to proliferate, so does the threat of debilitating cyber-attacks, like last year’s devastating ransomware attacks that damaged, destroyed and disrupted systems around the world. And these attacks are only growing more sophisticated – and commonplace.
We recognize that we and others in the tech sector have the first responsibility to address these issues. After all, we build the products. We operate the platform. We unfortunately are the battlefield in many ways. We are the first responders. At Microsoft and at many of our peers, our security professionals are the ones that answer the call, scramble onto airplanes, and stay by our customers’ side until their issues are resolved. Trust is the underpinning of our relationship with our customers, and we recognize that we must earn and maintain that trust every single day.
That’s why this year at RSA in San Francisco, Microsoft is announcing new offerings to take security more squarely to where it needs to go and where it has not effectively gone before – the edge. Today we’re unveiling a series of new services and features that will better harden not only our intelligent cloud but also the billions of connected devices that live on its edge. And we’re supporting these advances with new offerings that will make security easier for our customers to manage.
Azure Sphere: Extending security to the Internet of Things
Over the past 15 years, we’ve repeatedly taken steps to strengthen security protection not only for Windows and Office software, but also to harden our Xbox chipsets. We’re now combining this expertise and these advances to secure at the silicon level the billions of connected devices that will sit on the edge of the world’s computing network.
Applying new advances by our security researchers, we are introducing security protection for the next generation of cloud and edge devices powered by microcontroller units (MCUs). This growing class of cloud-connected devices – 9 billion of which ship every year – runs tiny MCU chips that will power everything from kitchen appliances and toys to industrial equipment on factory floors. This next wave of devices is increasingly intelligent and connected. They will improve daily life in countless ways, but if they’re not secure, they will make people, communities and countries vulnerable to attack in more ways than ever before.
Today we’re announcing Azure Sphere, the industry’s first holistic solution for securing MCU-based devices from the silicon to the cloud. This solution brings together three critical pieces and advances:
This combined approach to Azure Sphere brings together the best of hardware, software and services innovation. It is open to any MCU chip manufacturer, open to additional software innovation by the open source community and open to work with any cloud. In short, it represents a critical new step for Microsoft by integrating innovation across every aspect of technology and by working with every part of the technology ecosystem, including our competitors. We believe this holistic solution will bring to IoT devices better security, resilience and developer agility than anything on the market today.
Simplifying security through new cloud offerings
In the past, some enterprises were hesitant to move to the cloud because of perceived security risks. Today, customers appreciate that the cloud is almost certainly more secure than on-premises environments. The result is that customers trust the security of their enterprise to us, so they can focus on their core business.
Over the past year we’ve focused on strengthening Microsoft 365 so it not only helps our customers be more collaborative and productive, but also makes it easier to secure IT infrastructure against a growing range of threats. Because Microsoft 365 is a cloud service, we’re able to rapidly develop and deploy new security innovations based on learnings and insights coming from our Microsoft Intelligent Security Graph. Today we’re announcing four cloud-based advances that will enable customers to use Microsoft 365 to further strengthen their security protection:
Security is a shared responsibility
All of the advances we’re announcing today reflect another essential fact of life. Security has become a shared responsibility. We believe that Microsoft has an important responsibility and is in a unique position to help address the world’s security issues and contribute to long-term solutions. But no one has anything close to a monopoly on good security ideas or expertise. More than ever, the continuing rise in security threats requires that we work together in new ways across the tech sector and with customers and governments.
That’s why we’re committed not only to greater security collaboration at the technology level, but also to advancing the public security policies the world needs.
RSA offers the entire industry an important opportunity each year to talk about the challenges of cybersecurity. We need more of these conversations. Even more, we need action. That’s why we continue to advocate around the world to interpret and build on existing international laws and ultimately establish a Digital Geneva Convention to protect civilians against cyber-attacks. And it’s why just last week we launched Microsoft’s Defending Democracy Program, based on a new team at Microsoft dedicated to working with governments, technology companies, academia and civil society to address cyber-related threats and interference in democratic processes.
Today’s big security challenges require bold ideas. Whether it’s strengthening our products, using data to better identify and disrupt threats, or working with customers on their own cyber-resilience, we are committed to delivering world-class security to customers and partners. And we are committed to working across the tech industry and public sector to improve our shared defense of the technology infrastructure on which the world depends.
The post Using intelligence to advance security from the edge to the cloud appeared first on The Official Microsoft Blog.
Posted by Dave Smith, Developer Advocate for IoT
Earlier this year at CES, we showcased consumer products powered by Android Things from partners like Lenovo, LG, JBL, iHome, and Sony. We are excited to see Android Things enable the wider developer ecosystem as well. Today we are announcing the final preview release of Android Things, Developer Preview 8, before the upcoming stable release.
Feature complete SDK
Developer Preview 8 represents the final API surface exposed in the Android Things support library for the upcoming stable release. There will be no more breaking API changes before the stable v1.0 release of the SDK. For details on all the API changes included in DP8, see the release notes. Refer to the updated SDK reference to review the classes and methods in the final SDK.
This release also brings new features in the Android Things developer console to make building and managing production devices easier. Here are some notable updates:
Production-focused console enhancements
With an eye towards building and shipping production devices with the upcoming LTS release, we have made several updates to the Android Things developer console:
The new app library enables you to manage APKs more easily without the need to package them together in a separate zipped bundle. Track individual versions, review permissions, and share your apps with other console users. See the app library documentation for more details.
On mobile devices, apps request permissions at runtime and the end user grants them. In earlier previews, Android Things granted these same permissions automatically to apps on device boot. Beginning in DP8, these permissions are granted using a new interface in the developer console, giving developers more control of the permissions used by the apps on their device.
This change does not affect development, as Android Studio grants all permissions by default. Developers using the command line can append the -g flag to the adb install command to get the same behavior. To test how apps on your device behave with certain permissions revoked, use the pm command:
$ adb shell pm [grant|revoke] <permission-name> ...
App launch behavior
Embedded devices need to launch their primary application automatically after the device boots, and relaunch it if the app terminates unexpectedly. In earlier previews, the main app on the device could listen for a custom IOT_LAUNCHER intent to enable this behavior. Beginning in DP8, this category is replaced by the standard CATEGORY_HOME intent.
<activity android:name=".HomeActivity">
    ...
    <!-- Launch activity automatically on boot, relaunch on termination. -->
    <intent-filter>
        <action android:name="android.intent.action.MAIN"/>
        <category android:name="android.intent.category.HOME"/>
        <category android:name="android.intent.category.DEFAULT"/>
    </intent-filter>
</activity>
Apps that contain an IOT_LAUNCHER intent filter will no longer be triggered on boot. Update your apps to use CATEGORY_HOME instead.
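For reference, the legacy manifest entry being replaced would have looked like the sketch below. The fully qualified IOT_LAUNCHER category name shown is an assumption based on earlier previews; verify it against the documentation for the preview you are migrating from.

```xml
<!-- Legacy launch filter from earlier Android Things previews (pre-DP8). -->
<!-- Swap the IOT_LAUNCHER category for android.intent.category.HOME when targeting DP8. -->
<activity android:name=".HomeActivity">
    <intent-filter>
        <action android:name="android.intent.action.MAIN"/>
        <!-- Custom category name assumed from earlier previews -->
        <category android:name="android.intent.category.IOT_LAUNCHER"/>
        <category android:name="android.intent.category.DEFAULT"/>
    </intent-filter>
</activity>
```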
Thanks to all of you in the developer community for sharing your feedback with us throughout developer preview. Join Google's IoT Developers Community on Google+ to let us know what you're building with Android Things and how we can improve the platform in future releases to help you build connected devices at scale!
Samsung Electronics, Worldwide Olympic Partner in the Wireless Communications and Computing Equipment category, is bringing technological innovations to the Olympic Winter Games PyeongChang 2018 through interactive experiences at the new Samsung Olympic Showcases. Samsung will encourage fans and athletes to “Do What You Can’t” through fun, immersive experiences across multiple Samsung Olympic Showcases.
Throughout the Olympic Winter Games, a total of nine Samsung Olympic Showcases, featuring a mix of cultural, technological and immersive fan experiences, will be located at the Olympic Parks, the Olympic Villages and the Main Press Center in PyeongChang and Gangneung, and at Incheon International Airport, which hosts four of them. The Samsung Olympic Showcases @ PyeongChang Olympic Plaza and @ Gangneung Olympic Park, opening on February 9th, will provide visitors with engaging experiences incorporating Samsung’s legacy of breakthrough innovation and interactive activities powered by Samsung technology.
Athletes and fans will first experience the Samsung brand history and heritage in engineering, design, and craftsmanship, and then the partnership history with the Olympic Games. Then, through interactive experiences, visitors will be able to feel the exhilaration of real-world winter sports thrills through VR, including snowboarding and skeleton, as well as participate in alpine and cross-country skiing contests that will challenge fans’ fitness capacity. For the first time, visitors will be able to experience the ‘Mission to Space VR: A Moon for All Mankind’ created by Samsung. Visitors will experience a full space mission including mission briefing, trying on the training suits and helmets, and the truly immersive experience in the Moon rig where they will feel lunar gravity with every step.
“For two decades as a Worldwide Olympic Partner, Samsung has connected fans, visitors and athletes from around the world with our latest technological innovations, which have now evolved to also include immersive experiences,” said Younghee Lee, CMO and Executive Vice President, Samsung Electronics. “We’re delighted to share Samsung’s latest mobile technologies and products so fans and athletes can enjoy new, unique experiences while at the Olympic Winter Games.”
“Over the course of our longstanding partnership, Samsung has brought new experiences and technology advancements to each Olympic Games,” said IOC President Thomas Bach. “Samsung helps to make each and every one an exciting and meaningful experience for all.”
Visitors will experience and engage with Samsung’s products and Galaxy brand through a variety of activities in a comfortable, interactive environment, including:
Deeplocal is a Pittsburgh-based innovation studio that makes inventions as marketing to help the world's most loved brands tell their stories. The team at Deeplocal built several fun and engaging robotics projects using Android Things. Leveraging the developer ecosystem surrounding the Android platform and the compute power of Android Things hardware, they were able to quickly and easily create robots powered by computer vision and machine learning.
DrawBot is a DIY drawing robot that transforms your selfies into physical works of art.
"The Android Things platform helped us move quickly from an idea, to prototype, to final product. Switching from phone apps to embedded code was easy in Android Studio, and we were able to pull in OpenCV modules, motor drivers, and other libraries as needed. The final version of our prototype was created two weeks after unboxing our first Android Things developer kit."
- Brian Bourgeois, Producer, Deeplocal
Want to build your own DrawBot? See the Hackster.io project for all the source code, schematics, and 3D models.
HandBot is a robotic hand that visually recognizes hand gestures and applies machine learning to learn from and react to them.
"The Android Things platform made integration work for Handbot a breeze. Using TensorFlow, we were able to train a neural network to recognize hand gestures. Once this was created, we were able to use Android Things drivers to implement games in easy-to-read Android code. In a matter of weeks, we went from a fresh developer kit to competing against a robot hand in Rock, Paper, Scissors."
- Mike Derrick, Software Engineer, Deeplocal
Want to build your own HandBot? See the Hackster.io project for all the source code, schematics, and 3D models.
Visit the Google Hackster community to explore more inspiring ideas just like these, and join Google's IoT Developers Community on Google+ to get the latest platform updates, ask questions, and discuss ideas.
In today’s Microsoft second quarter earnings call, CEO Satya Nadella showcased how customers are using our technology to create digital business solutions. The 56 percent year-over-year growth in commercial cloud revenue — with broad-based growth across geographic markets and industry segments — is fueled by customer and partner success.
Just this week, we announced news with Publicis Groupe, Columbia Sportswear and PTC. Communications and advertising giant Publicis Groupe is building its new AI-powered platform, Marcel, on Microsoft Azure and Office 365 to empower its 80,000 employees worldwide. Columbia Sportswear, an innovator in active outdoor apparel, announced its choice of Dynamics 365 and Azure to enhance its worldwide consumer experience. Plus, PTC, a leader in product lifecycle management solutions that include Internet of Things (IoT), augmented reality and 3D computer-aided design for the industrial sector, has selected Azure as its preferred cloud platform.
Below are more customer highlights from this quarter.
In the industrial sector, United Technologies Corp. (UTC) builds and services millions of products, from elevators in some of the world’s tallest buildings to aerospace equipment. UTC is using Dynamics 365 and Azure to help its massive field organization better predict and respond to customer needs. Chevron announced Azure as its primary cloud for intelligent, digitized oil fields in order to increase revenues, reduce costs and improve the safety and reliability of operations.
Consumer product companies are innovating with the Microsoft cloud, too. Kohler has built a legacy of blending home comfort and style through innovation. This year marks Kohler’s entrance into the connected home market with a new line of kitchen and bathroom products, Kohler Konnect. For example, with Azure IoT, Kohler Konnect products respond to voice and in-app commands to manage bath temperature or start a shower.
In real estate, CBRE entered the smart building market with a customizable, connected workplace solution to give property investors and occupants a single, seamless access point to building amenities and services. Powered by Azure IoT, the CBRE 360 mobile apps will allow users to locate colleagues and navigate the workplace, reserve workspaces, and access food and beverage services, as well as basic building and high-end concierge services.
In retail, national grocery chain Kroger is leveraging Azure to power its EDGE (Enhanced Display for Grocery Environment) solution — a grocery-store shelf with digital screen displays showing prices, nutritional information and more. The system manages high volumes of data, better connects store management and customers and ensures stock does not run low. Home-improvement company Lowe’s worked with Fellow Robots to deploy autonomous LoweBots to assist with inventory data and shelf intelligence. As the LoweBot scans inventory on the shelves, Azure helps Lowe’s keep constant tabs on inventory and frees store employees to assist customers. Merkal Calzados, Spain’s leading retailer of affordable footwear, has chosen Dynamics 365 for retail, finance, operations, and customer service to transform how it selects and sources product, improve marketing and accelerate omnichannel growth.
One of the world’s largest casual dining companies, Bloomin’ Brands, Inc., chose Azure to help its digital transformation across approximately 97,000 team members and almost 1,500 restaurants. The parent company of Outback Steakhouse, Bonefish Grill, Carrabba’s and Fleming’s Steakhouse, Bloomin’ Brands is using Azure advanced analytics, machine learning and Power BI to enable guest engagement and convenience through mobile apps, websites and e-commerce, including the customer loyalty program.
In the healthcare sector, UMB Healthcare Services, a division of UMB Bank, continues to improve its health savings account (HSA) solution, powered on Azure, by creating a seamless customer experience for 1.2 million HSA accounts. For example, ReceiptVault allows HSA owners to safely manage their health care receipts in one place, which is an important tax requirement.
Aurora Health Care operates 15 hospitals, more than 150 clinics and 70 pharmacies throughout eastern Wisconsin and northern Illinois, Premera Blue Cross is the largest health plan in the Pacific Northwest, and UPMC is one of the largest integrated health care delivery networks in the U.S. These partners are working with us on a new AI-powered health bot project, currently in private preview. Powered by Cognitive Services and enriched with medical content, the bots give customers self-service access to their health-related questions and information.
In the government space, Kansas City’s Azure-powered solution from Opti improves water quality and saves local citizens and companies money. The solution uses a wide range of data to control rainwater entry into the sewer system and could reduce the overall cost of the program by almost a billion dollars over 25 years.
In the world of payment technology, Mastercard selected Microsoft 365 to support a modern workplace that empowers its employees’ teamwork and innovation. One of the largest companies in the payments space, Mastercard connects consumers, financial institutions, merchants and businesses in more than 210 countries and territories to achieve its vision of a world beyond cash. The company is also leveraging Azure for apps, including Masterpass, a digital mobile wallet and rewards program application.
In the auto industry, Volkswagen Group Digital is piloting new forms of workplaces. The company recently deployed Surface Books, Surface Pros, Surface Studios and Surface Hubs in its 10X service design lab and Future Centers, and to run its collaboration application, DEON. DB Schenker, a division of Deutsche Bahn AG, focuses on logistics across air, land and sea freight, as well as contract logistics. The company turned to Windows 10 to help safeguard its business with intelligent, built-in security and to empower the productivity of its global, mobile workforce.
Across the globe, industry leaders are choosing Microsoft to power their business strategies and new products, or support culture change. I am constantly inspired by our customers’ and partners’ digital ambitions and innovation, and I am eager to continue partnering with them on their digital journey.
Samsung Electronics, the world leader in advanced memory technology, today announced that it has launched an 800-gigabyte (GB) solid state storage drive—the SZ985 Z-SSD, for the most advanced enterprise applications including supercomputing for AI analysis.
Developed in 2017, the new 800GB Z-SSD provides the most efficient storage solution for high-speed cache data and log data processing, as well as other enterprise storage applications that are being designed to meet rapidly growing demand within the AI, big data and IoT markets.
“With our leading-edge 800GB Z-SSD, we expect to contribute significantly to market introductions of next-generation supercomputing systems in the near future, enabling improved IT investment efficiency and exceptional performance,” said Jinman Han, senior vice president, Memory Product Planning & Application Engineering at Samsung Electronics. “We will continue to develop next-generation Z-SSDs with higher density and greater product competitiveness, in order to lead the industry in accelerating growth of the premium SSD* market.”
The new single-port, four-lane Z-SSD features Z-NAND chips that provide 10 times higher cell read performance than 3-bit V-NAND chips, along with 1.5GB of LPDDR4 DRAM and a high-performance controller. Armed with some of the industry’s most advanced components, the 800GB Z-SSD features 1.7 times faster random read performance, at 750K IOPS, and one-fifth the write latency, at 16 microseconds, compared to the NVMe SSD PM963, which is based on 3-bit V-NAND chips. The Z-SSD also delivers a random write speed of up to 170K IOPS.
Due to its high reliability, the 800GB Z-SSD guarantees up to 30 drive writes per day (DWPD) for five years, or a total of 42 petabytes. That translates into storing a total of about 8.4 million 5GB-equivalent full-HD movies during a five-year period. The reliability of the new Z-SSD is further underscored by a mean time between failures (MTBF) of two million hours.
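The quoted endurance and performance figures can be sanity-checked with some quick arithmetic. The sketch below is only a back-of-the-envelope check, assuming decimal gigabytes and 365-day years; the small gap between the computed 43.8 PB and the quoted 42 PB presumably reflects Samsung's own accounting, and the PM963 baseline numbers are merely implied by the stated 1.7x and 5x multipliers rather than published specs.

```python
# Back-of-the-envelope check of the Z-SSD endurance and performance
# figures quoted above (decimal units, 365-day years).

capacity_gb = 800          # drive capacity in GB
dwpd = 30                  # guaranteed drive writes per day
years = 5                  # warranty period

# Total data written over the warranty period.
total_gb = capacity_gb * dwpd * 365 * years
total_pb = total_gb / 1_000_000
print(f"Computed endurance: {total_pb:.1f} PB (quoted: 42 PB)")

# The quoted 42 PB expressed as 5GB full-HD movies matches the 8.4 million figure.
movies = 42_000_000 / 5
print(f"5GB movies: {movies / 1e6:.1f} million")

# Baseline PM963 figures implied by the stated 1.7x and 5x multipliers.
print(f"Implied PM963 random read: {750_000 / 1.7:,.0f} IOPS")
print(f"Implied PM963 write latency: {16 * 5} microseconds")
```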
Samsung will introduce its new Z-SSD in 800GB and 240GB versions, as well as related technologies at ISSCC 2018 (International Solid-State Circuits Conference), which will be held February 11-15 in San Francisco.
Note: All brand, product, service names and logos are trademarks and/or registered trademarks of their respective owners and are hereby recognized and acknowledged. Z-SSD is a trademark of Samsung Electronics Co., Ltd.
* Editor’s Note: A premium SSD is defined here as an SSD with IOPS exceeding 550K for random reads and latency lower than 20µs.
In a few years’ time, users may not have to figure out how to operate different devices individually or make a choice between services. Instead, the new world of connected devices and services based on artificial intelligence (AI) will be able to recommend and perform, on their own, integrated and seamless functions for users in and across environments from the home to office to car.
For example, in the home, when a user wakes up in the morning on a rainy day, the home lights will gradually brighten, while music fit for a rainy day is selected and played in the background. A cup of coffee will be prepared as soon as the user says “coffee” while stepping into the kitchen and the refrigerator will also recommend meal ideas for the day, asking the user whether he or she would like to buy ingredients online.
In the Information and Communications Technology (ICT) industry, Samsung Electronics is uniquely positioned to bring this world of connected AI services to life, based on the almost half a billion connected devices the company sells every year. In fact, given the typical lifecycle of a device, there are more than a billion Samsung devices actively used around the world at any given time.
Samsung’s device portfolio is also the industry’s broadest, and includes mobile devices such as smartphones, tablets and wearable devices, office devices such as PCs, signage and Samsung Flip, devices for the home such as Samsung Smart TVs, Family Hub and FlexWash and FlexDry, and much more.
At this year’s CES, Samsung highlighted its latest innovations in its vision to drive the Internet of Things (IoT) supported by AI. Samsung Smart TVs, now integrated with Bixby, are able to play music and shows personalized for users, as well as show who is at the front door or what is inside the refrigerator. The Family Hub refrigerator, also integrated with AI, recognizes the voices of different family members and provides each of them with a personalized daily schedule.
Moving forward, Samsung will continue to remain focused on holistically integrating AI into a connected setting, such as the home or the office, in contrast to other players primarily pursuing implementation of AI on a few devices and services. In the following months, Samsung will integrate not only Samsung devices, but also IoT devices and sensors developed by external partners into the SmartThings eco-system, allowing a single SmartThings app to control everything. Furthermore, Samsung also plans to integrate AI into all its connected devices by 2020.
In the coming years, many IoT devices with AI support will generate a vast array of usage patterns and scenarios. How AI-enabled devices learn and analyze complex usage patterns and provide consumers with the most optimized options will be critical to the success of AI technology for the near future. In other words, the success of AI will boil down to how well the devices understand the users.
Therefore, Samsung’s perspective on AI is to build an eco-system that is user-centric rather than device-centric. To pursue that goal, we will start by building an AI platform under a common architecture that will not only scale quickly, but also provide the deepest understanding of usage context and behaviors, making AI more relevant and useful.
For decades, Samsung has successfully introduced products and innovations by researching the lifestyles and behavior of global consumers. True to our heritage of user-centric product development, Samsung will begin an exciting journey open to boundless possibilities in new user experiences by integrating AI into the open IoT ecosystem it is currently developing. This journey will certainly be fascinating for us here at Samsung, but even more so for consumers, as Samsung takes major steps forward to bring consumers’ hopes and expectations to life.
Samsung Electronics announced today that it has become a Platinum Member of the Linux Foundation Networking Fund.
The Linux Foundation Networking Fund (LFN) is a new entity that integrates the governance of participating projects in order to enhance operational excellence, simplify member engagement, and increase collaboration across open source networking projects and standards bodies.
“I am confident that the Linux Foundation Networking Project will serve as a huge source of innovation for next generation networks, including 5G,” said Woojune Kim, Senior Vice President of North America Business, Networks Business at Samsung Electronics. “As a member of the project, Samsung will work with the open source community to ensure that new carrier-grade solutions based on new technologies such as cloud data centers are available.”
“We are delighted to welcome Samsung to the LFN,” said Arpit Joshipura, General Manager of Networking and Orchestration at the Linux Foundation. “We are building a strong community of ecosystem partners all committed to operational excellence and collaboration that will ultimately enable faster integration and upstreaming across open source and open standards. I look forward to working closely with Samsung and the rest of the community in this endeavor.”
The race for next generation communications is on. As a result, the need for a flexible and software-centric network platform that can provide a diverse range of services based on 5G and IoT is growing by the day. Open source stands out as a key technology in the industry because it can satisfy this demand by removing the difficulties that come with hardware-centric legacy network equipment. What open source will ultimately enable is flexible network management, enabled by optimized resource allocation and network slicing.
In line with the telecommunications industry’s growing awareness of open source, Samsung has been actively participating in a range of open source communities such as Open Platform for NFV (OPNFV), Open Network Automation Platform (ONAP), Open Network Operating System (ONOS) and OpenStack to come up with virtualization solutions that meet all the criteria for commercial use. These measures display Samsung’s effort toward artificial intelligence-based automated network platforms that will deliver simple, flexible and cost-effective network management.
To learn more about the Linux Foundation Networking Fund, please visit: https://www.linuxfoundation.org/.
For those of us in the tech and related industries, the start of each new year is a time not only for our own personal moments of reflection, but also an opportunity, as professionals passionate about technology, to share and envision together the latest in technology innovation at the annual Consumer Electronics Show (CES) in Las Vegas.
For the past several years, the Internet of Things (IoT) has remained the industry’s biggest buzzword for its promise of delivering seamless connectivity across the multiple devices and technologies that we interact with in our daily lives – from our smartphones to smart TVs to the Family Hub refrigerator to even our cars. Yet while the vision has been alluring, with IoT and related technologies still maturing, the promise has always remained ‘a few years off’.
At CES 2018, our aim, at Samsung, will be to show you the work we’ve done to change that, and to begin delivering on the promise of a connected world, today.
Across the many devices consumers interact with in their places of business, at home, and while on the go, each is typically encumbered by a different setup process, a different password to remember, and a different interface to learn and manage, which has made the connected experience anything but easy.
At Samsung, we decided to do something about this, and at this year’s CES, we’ll be sharing our breakthroughs in making the IoT experience easy and intuitive for you. This has involved delivering seamless connectivity between any device through a single experience, backed by an integrated ecosystem that manages all devices in close, fluid synchronization.
The connected experience we will be introducing is also powered by a personalized intelligence interface, with the aim of ensuring you are able to tap into all the potential and power this connectedness provides, as easily as if you were flipping a switch. Understanding that innovation on this scale and delivering on a truly connected world can’t happen in a silo, we’ve also worked closely with industry partners as part of the largest IoT standardization body, the Open Connectivity Foundation (OCF), which I will also discuss further at the show.
Until now, the promise of a world of connected devices has remained too fragmented, and as a result too complex and difficult for consumers to navigate or practically take advantage of. A connected world, however, should be anything but. It should work for you, and make life easier for you. I couldn’t be more excited to share with you how Samsung is bringing this vision to reality, in just a few days’ time at CES 2018.
At CES 2018, Samsung Electronics will share its vision and strategy for an intelligent and seamless Internet of Things (IoT) experience for all consumers, showcased by the latest innovations Samsung has made to start bringing this experience to reality, now.
Samsung’s press conference at this year’s CES will be held on Monday, January 8, 2018 from 14:00 to 14:45 (PST) at the Mandalay Bay Convention Center, Level 2, Ballrooms G+H. The press conference is open to all credentialed media with a CES badge.
For all others interested, Samsung will also be streaming the press conference live on the following channels:
For the latest information and announcements regarding Samsung at CES 2018, please visit Samsung Newsroom at: http://news.samsung.com.
Today we’re kicking off Connect(); 2017, one of my favorite annual Microsoft developer events, where over three days we get to host approximately 150 livestreamed and interactive sessions for developers everywhere – no matter the tools they use or the platforms they prefer. Today at Connect(); 2017 I’m excited to share news that will help developers build for the intelligent cloud and the intelligent edge. There has never been a better time to be a developer, as developers are at the forefront of building the apps driving monumental change across organizations and entire industries. At Microsoft, we’re laser-focused on delivering tools and services that make developers more productive, helping developers create in the open, and putting AI into the hands of every developer so they can unleash the power of data and reimagine possibilities that will improve our world.
Any developer, any application, any platform
In previous years at Connect(); we announced the open-sourcing of .NET Core. Last year we announced Microsoft joining the Linux Foundation and shared SQL Server on Linux. This year we’re continuing to deliver on our commitment to the open source community and making sure we can support customers no matter their platform of choice.
Azure Databricks — preview: Built in collaboration with the founders of Apache® Spark, Azure Databricks is a fast, easy and collaborative Apache® Spark-based analytics platform optimized for Azure. Azure Databricks combines the best of Databricks and Azure to help customers accelerate innovation with one-click set up, streamlined workflows and an interactive workspace. Native integration with Azure SQL Data Warehouse, Azure Storage, Azure Cosmos DB and Power BI simplifies the creation of modern data warehouses that enable organizations to provide self-service analytics and machine learning over both relational and non-relational data with enterprise-grade performance and governance. Customers inherently benefit from enterprise-grade Azure security, compliance and SLAs, as well as simplified security and identity control with Azure Active Directory integration. With these innovations, Azure is the one-stop destination to unlock powerful scenarios that make AI easy.
Microsoft joins MariaDB Foundation: Today we’re excited to be joining the MariaDB community as a platinum member of the MariaDB Foundation. As part of this membership, we’re committed to working closely with the foundation, actively contributing to MariaDB and the MariaDB community. We’re also announcing we’ll be delivering a preview of Azure Database for MariaDB, which will bring the fully managed service capabilities to MariaDB. Developers can sign up for the upcoming preview for Azure Database for MariaDB.
Azure Cosmos DB with Apache® Cassandra API — preview: With this preview, developers get Cassandra-as-a-service using the Cassandra SDKs and tools they are already familiar with, backed by the power of Azure Cosmos DB. Developers can re-use existing code they’ve already written and build new applications using the Cassandra API against Azure Cosmos DB’s globally distributed, multi-model database service. Azure Cosmos DB has been designed to scale throughput and storage across any number of geographic regions, with comprehensive SLAs and multiple well-defined consistency levels for more precise data latency management.
GitHub Partnership on GVFS: With GitHub, today we’re announcing Microsoft and GitHub are partnering to bring GVFS to GitHub’s 25 million users. GVFS is an open-source extension to the Git version control system developed by Microsoft to support the world’s largest repositories.
Helping developers be more productive
At Microsoft our mission is to empower every person and every organization on the planet to achieve more, and developers are no exception to this. We have a strong set of new announcements to help developers, as well as whole development teams, be more productive as they move into a world of continual innovation and continual development of their apps. At Connect(); we’re announcing:
Visual Studio App Center — general availability: The most comprehensive app development lifecycle solution for Objective-C, Swift, Java, Xamarin and React Native, Visual Studio App Center helps developers automate and manage the lifecycle of their iOS, Android, Windows and macOS apps. Developers can connect their repos and within minutes automate their builds, test on real devices in the cloud, distribute apps to beta testers and monitor real-world usage with crash and analytics data, all in one place.
Visual Studio Live Share — first look: Visual Studio is delivering the next major advancement in developer productivity with Visual Studio Live Share, which enables true real-time collaboration within both Visual Studio and Visual Studio Code. It lets developers seamlessly and securely share their project with other developers so that they can collaboratively edit and debug in real time together without having to sit in front of the same screen or in the same room. Rather than just screen sharing, Visual Studio Live Share lets developers share their full project context with a bi-directional, instant and familiar way to jump into opportunistic, collaborative programming.
Visual Studio Connected Environment for Azure Container Service (AKS) — upcoming preview: Visual Studio and Visual Studio Code will now use the Connected Environment for AKS features, making Kubernetes development a natural fit for Visual Studio developers. Developers will be able to easily edit and debug cloud-native applications running on Kubernetes in the cloud with the speed, ease and full functionality and productivity they’ve come to expect from Visual Studio.
Azure DevOps Projects — preview: Available in the Azure management portal, Azure DevOps Projects will deliver a guided experience, helping developers easily explore the many Azure platform services available to help build their apps and in the process, configure a full DevOps pipeline powered by Visual Studio Team Services. In less than five minutes, this feature will ensure that DevOps is not an afterthought, but instead the foundation for new projects and one that works with many application frameworks, languages and Azure hosted deployment endpoints.
Take a look at how Columbia Sportswear is leveraging Microsoft’s developer tools and DevOps platform to drive their own digital transformation.
Putting AI in the hands of every developer
As AI becomes more pervasive and developers are able to harness the vast amounts of data being created every day, coupled with the power and scale of the cloud, we want to make it easy for developers to create the next generation of intelligent applications. We want to put AI in the hands of every developer with the tools and platforms they are most familiar with. With the announcements below, we’re delivering new AI tools and bringing machine learning and intelligence to the edge.
Visual Studio Tools for AI — preview: This is an extension of our popular Visual Studio IDE, which will allow developers and data scientists to create AI models with maximum productivity. Visual Studio Tools for AI delivers debugging and rich editing, with support for most deep learning frameworks, such as Cognitive Toolkit, TensorFlow and Caffe. With this addition, developers and data scientists have a full development experience at their fingertips to create, train, manage and deploy models locally, and scale to Azure.
Azure IoT Edge — preview: Today we’re making available the preview of Azure IoT Edge, a service that deploys cloud intelligence to IoT devices via containers, and we’re introducing a new set of breakthrough cloud capabilities to run on IoT Edge, with Azure Machine Learning, Azure Functions and Azure Stream Analytics. Azure IoT Edge enables developers to build and test container-based workloads using C, Java, .NET, Node.js and Python, and simplifies the deployment and management of workloads at the edge. Azure IoT Edge can run on IoT devices with as little as 128MB of memory. As part of this announcement, we’re also releasing Azure Machine Learning updates, which enable AI models to be deployed and run on edge devices through the Azure IoT Edge service. Additional updates include easier AI model deployment on iOS devices with Core ML, as well as updates to the Azure Machine Learning Workbench tool.
Every year at Connect(); we get to share new tools and services that we hope will empower and inspire developers to build great apps. I encourage you to tune into Connect(); 2017 to learn more about all of the new innovations we’re announcing today, and to see what you can reimagine.
Just about every industry today is being transformed by Artificial Intelligence (AI). From retail and entertainment to transportation and healthcare, AI is seeping into our world in ever more profound ways, revolutionizing the way we go about our daily lives.
At Samsung Electronics, we share in the vision of a fully open, intelligent and connected world. A world where AI will play an integral role, where one day everything from our phones to our refrigerators will possess some sort of intelligence to help us seamlessly interact with our surroundings. To bring the company closer to realizing its vision, we are working tirelessly to develop technologies that can help us take this first leap into intelligence.
Since it was first envisioned in the 1950s, AI has made a palpable impact on our lives, giving us practical speech recognition, more effective web search and self-driving cars, among other innovations.
Earlier this month, Google’s AlphaGo Zero AI program made news by mastering the ancient Chinese board game Go in just three days without any human assistance. This major advance comes just two decades after Deep Blue defeated chess grandmaster Garry Kasparov, illustrating that AI has not only come a long way in a short time, but is on track to create unthinkable opportunities across all industries that will add new value to our lives.
The recent explosion in AI is enabled by a number of factors, including wider availability of GPUs, virtually infinite amounts of data, and more advanced machine and deep learning algorithms. Additionally, investment in AI roughly tripled between 2013 and 2016, reaching an estimated $26 billion to $39 billion in 2016, further propelling the development of new intelligent technologies.
Despite these advancements, there are very real challenges that are hindering the development of AI technologies, including the lack of the required talent pool in the AI industry. Furthermore, many device manufacturers haven’t quite figured out how to best optimize the user data they receive from their sensor-equipped products. As a result, enterprises struggle to determine what AI is capable of and what kind of value it can bring to consumers.
Samsung, too, has contemplated how AI can deliver real value to its users, and in doing so has developed Bixby, a bold reinvention of its intelligent interface that’s even more ubiquitous, open and personal. Powered by Samsung Connect, Bixby will act as the controlling platform of your connected device ecosystem, including mobile phones, TVs and even home appliances, to make the smart home experience even smarter.
In fact, we are adding Bixby Voice to our Family Hub refrigerator. Now, you will be able to check the weather, build shopping lists and order groceries with the power of your voice. So, for example, if you were running low on milk, you would just say, “Hi Bixby, order milk” to order food directly from the screen.
The integration of Bixby Voice and Samsung Connect into the Family Hub refrigerator marks a big step – one that will offer developers tremendous opportunities to develop new content, applications and experiences in areas such as food, health, home management, entertainment and more.
We think this could be the fourth wave, where you have programmable objects dispersed throughout your entire home, seamlessly connected and communicating in a personalized and intuitive manner. In this way, we are moving beyond simply connecting devices to the Internet and are taking the next step by connecting devices to intelligence. This new era is what we are calling the “Intelligence of Things.”
A world powered by the Intelligence of Things will open up entirely new possibilities. In this world, every machine around you is intelligence-enabled, capable of understanding and anticipating your needs. In this world, mundane tasks are a thing of the past, allowing you to spend more time doing the things you enjoy with the people you love.
We know that we have a long way to go to fully realize our vision. But we are wholly committed to building upon our heritage of creating meaningful innovation and driving digital transformation to advance technologies in artificial intelligence. We could not be more excited to help lead the changes that will define this new, transformative era.
Samsung Electronics announced that it will support seven startups created by Samsung employees which are being spun off from the company’s C-Lab (Creative Lab) program on October 31. Including these seven businesses, a total of 32 C-Lab alumni startups have been created as a result of Samsung’s commitment to investing in employee-driven innovation and developing a startup ecosystem.
From the latest in VR/AR, IoT, healthcare and more, Samsung selected the seven new startups for investment based on their business potential and contribution to innovation:
The prospective entrepreneurs were provided with intensive training and preparation on key aspects of running a business, with the help of experts, before launching their startups. They also took part in talk sessions with former colleagues who have successfully spun off businesses, to gain practical know-how.
“We have provided the support to establish 32 C-Lab alumni startups over the past two years and based on our valuable past experience, we are planning to build up a more profound and actionable program to nurture employees’ ideas and launch new startups,” said Jaiil Lee, Vice President and Head of the Creativity & Innovation Center at Samsung Electronics.
C-Lab alumni startups have performed well in recent years, securing additional global funding, increasing company valuations and opening up unexpected business opportunities. For example, 360-degree camera manufacturer Link Flow started as a business targeting travellers in their 30s and 40s, but after drawing interest from the security maintenance industry, it iterated further on its product, which will be unveiled officially at the upcoming CES 2018.
Created in December 2012, the C-Lab is an in-house startup incubation program that nurtures a creative organizational culture and innovative ideas among Samsung employees. The spin-off policy was introduced in 2015, and since its inception, C-Lab alumni startups have been striving to build a new startup ecosystem.
Samsung kicked off this year’s Samsung Developer Conference (SDC 2017), held within the halls of San Francisco’s Moscone West convention center, by outlining its vision of an open, intelligent and connected world backed by an ecosystem of innovative devices and services.
SDC’s most diverse lineup of sessions and activities yet offered an SDC-record 6,000 developers, partners, business leaders and content creators a roadmap for innovating in the “Intelligence of Things” era. By spotlighting a “connected thinking” approach to innovation, and introducing exciting updates to its IoT, artificial intelligence (AI) and augmented reality (AR) technologies, Samsung hopes to arm developers with the tools and insights they need to help usher in a new phase of connectivity.
In case you missed it, here’s a rundown of five key factors that made this year’s SDC the best yet.
Day one of SDC 2017 kicked off with keynote speeches from Samsung and tech industry leaders, who began by revealing the company’s new, unified IoT platform: SmartThings.
The speakers outlined how the cloud-based platform, which combines three existing IoT services – SmartThings, Samsung Connect, and ARTIK – will offer seamless controls over IoT-enabled products and services, and serve as the foundation for one of the world’s largest IoT ecosystems. By providing access to one cloud API across SmartThings-compatible products, Samsung explained, SmartThings will ultimately allow developers to build connected solutions that reach more people.
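To make the "one cloud API" idea concrete, the hedged sketch below builds an authenticated device-list request against the publicly documented SmartThings REST endpoint (`api.smartthings.com/v1`). The endpoint path and bearer-token scheme come from SmartThings' public developer documentation rather than from this article, and the access token is a placeholder:

```python
import urllib.request

# Base URL of the public SmartThings Cloud API (v1), per SmartThings developer docs.
SMARTTHINGS_API = "https://api.smartthings.com/v1"

def build_device_list_request(token: str) -> urllib.request.Request:
    """Build an authenticated request for the caller's device list.

    A single bearer token against one endpoint is what lets one client
    talk to any SmartThings-compatible product, regardless of maker.
    """
    return urllib.request.Request(
        f"{SMARTTHINGS_API}/devices",
        headers={"Authorization": f"Bearer {token}"},
    )

# "YOUR-PERSONAL-ACCESS-TOKEN" is a placeholder, not a real credential.
req = build_device_list_request("YOUR-PERSONAL-ACCESS-TOKEN")
print(req.full_url)
```

Opening the request with `urllib.request.urlopen` would return the device list as JSON; that step is omitted here because it requires a live token and account.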
Other highlights of day one’s keynote event included Samsung’s announcements of its plans to incorporate Bixby support into more Samsung and IoT devices; release Bixby 2.0 – an update for the intelligent assistant that’s more ubiquitous, open and personal; and advance its leadership in the field of AR through a partnership with Google.
Day two’s keynote speakers included Stan Lee, the chief creative force behind Marvel Comics, and Arianna Huffington, the founder and CEO of Thrive Global. Sharing their insights on topics including the creative process, innovation and “connected thinking,” the distinguished speakers inspired attendees to go further, break boundaries, connect people, and solve bigger problems.
SDC 2017’s sessions and activities focused largely on Samsung services and devices, intelligent technology trends, and exciting business opportunities. They also offered attendees a chance to listen to valuable insights from tech leaders on topics including AR, VR and game development, and on innovating with Samsung’s unified IoT platform.
The formats for the sessions, which numbered 49 in total, spanned everything from informative lectures, panels and speeches, to fun and engaging “Ask Me Anything” discussions. The session slate offered enriching activities not only for professional developers, but for kids as well.
For SDC 2017, Samsung invited students from around the world to participate in an array of fun activities. Programs included the Samsung Kids Experience, which featured fun and immersive tablet-powered lessons, and Youth Track, which invited aspiring developers and creators from local high schools to take part in a special Gear 360 workshop.
SDC 2017 also offered developers the chance to examine software development kits (SDKs) for various Samsung apps and tools, and test out a wide array of the company’s latest offerings. These not only encompassed Samsung’s Galaxy Apps ecosystem, mobile tools, and services such as Samsung Pay, Samsung DeX and Samsung Knox, but also devices like its IoT-connected Family Hub refrigerators, Smart TVs, the Galaxy Note8, and the Gear VR headset.
Visitors to the Bixby section of the booth were able to experience the intelligent interface’s convenient functions through firsthand demos. On the other end of the exhibition floor was the SmartThings Developer Zone, which presented developer tools for the new, integrated IoT platform, and featured showcases for Samsung’s SmartThings partners and the ARTIK IoT chipset.
The SDK Bar offered info on the complete range of Samsung SDKs and APIs that were being showcased at the event, and directed attendees toward their corresponding sessions. The stream.Code101 section, meanwhile, allowed attendees to learn, via hands-on tutorials, how to develop apps and immersive experiences for products including the Gear VR, the Gear 360, Samsung Pay and Samsung Pass.
Along with showcasing Samsung’s newest intelligent technologies, SDC 2017 spotlighted several Samsung initiatives designed to inspire developers to dream up fresh innovations, and help communities across the globe. The hands-on exhibits for Samsung’s citizenship efforts showcased innovative ways that Samsung technologies are being used to create solutions that target healthcare, education, and important social issues.
This year’s SDC featured several attention-grabbing examples from partners around the world that utilized the Gear VR. These included the Spanish Ministry of Education, Culture and Sport’s “#ByeByeBullying” campaign; a variety of health solutions developed by Nordic innovators; Limbic Life’s Limbic Chair VR, which has been used for stroke rehabilitation in Switzerland; and C-Lab’s Relumino, a Gear VR app that helps the visually impaired see the world more clearly. In addition, the exhibit’s Galaxy Upcycling section highlighted ways that Samsung products at the end of their life cycles may be used to create dynamic innovations.
SDC 2017 was jam-packed with fun and interactive activities that offered attendees an opportunity to relax, let loose, and enjoy immersive VR and live entertainment.
The conference featured a wide range of can’t-miss competitions that allowed coders, gamers, and entrepreneurs to test their skills to win bragging rights and gear. Contests included “SDC’s Got Code Talent”, an experts-only coding challenge with a Galaxy Note8 up for grabs, as well as the Soundcamp SDC Challenge, which saw contestants compose a signature song with the help of a real DJ in a quest to create SDC 2017’s top track.
In addition, the booth’s SDC Lounge included an LED “Google Play Music Wall” that visualized Google Play Music’s ability to recommend the right song for any moment. Speaking of music, the SDC Lounge also served as the stage for musician Daler Mehndi’s exclusive unveiling of India’s first VR music video. And of course, the lounge’s dedicated VRcade offered attendees more opportunities to enjoy immersive VR.
Day one of the conference closed with Samsung Celebration, an exclusive concert headlined by critically acclaimed indie artists Banks and Bonobo, while day two ended with a night of immersive content, networking, demonstrations, and can’t-miss experiences centered around VR.
And thus concludes another successful Samsung Developer Conference. For more info on the topics and innovations that were discussed at the event, check out Samsung Newsroom’s recap of SDC 2017’s keynote speeches.