Consumers have an insatiable appetite for advancements that improve their convenience, safety, and user experience. We see this most obviously in the human interface, which has evolved over the years from being purely tactile to include a wider range of input methods, from voice to gesture to video and various computer vision capabilities, everywhere from sales terminals to smart homes. The next step will be devices that not only understand direct commands but can infer intent.
In parallel, gnawing concerns over the security and latency of traditional cloud-connected devices have paved the way for more edge-based processing. This is especially true for the human-machine interface (HMI). But local processing adds another wrinkle for technology developers, who must weigh the specific use-case requirements, development options, and cost of smart, machine-learning-trained devices that introduce new levels of automation to power perceptive intelligence and ambient computing.
Edge AI is the foundation
The foundational enabler of a more sophisticated, user-friendly, and safer IoT experience is what has commonly been called edge AI. By definition, edge AI means the AI processing runs within the end product itself (a set-top box or smart display, for example) and not in the cloud. The rationale is well understood: better privacy, less bandwidth, faster response times, and even eco-friendliness, since edge processing reduces the energy, water, and other resources needed to run massive data centers.
Edge AI has been adopted in many applications that touch our lives every day, but initial uses have largely been limited to expensive products such as smartphones and cars. As a result, the edge AI implementations targeting these products are also expensive and have been out of reach for consumer retail devices in the smart home. And for the most part, existing edge AI applications are one-dimensional in the user experience they offer: AI-enabled vision in an advanced driver assistance system (ADAS), or picture-quality enhancement in a mobile phone, for example.
What would be the compelling reasons for creating and adopting edge AI solutions for the smart home?
HMI driving edge AI in the home
We’re seeing particularly strong interest and a growing array of use-case opportunities in the ubiquitous consumer IoT segment, a catch-all term for the various entertainment, communication, home automation, security, and sundry other devices, appliances, and gadgets we increasingly rely on. Especially now, consumers want a connected experience without the cost, privacy, and performance issues of traditional cloud connectivity. The desire for more immersive and perceptive human-machine interaction is a key factor driving the need for edge AI in the smart home.
With a smart-home-focused, AI-based edge computing solution on the market, the performance needed to create a more human-like experience will be available to a wider range of products.
Real-world examples that benefit from edge AI in the smart home are plentiful. Some have an obvious practical benefit: a doorbell camera that can tell the difference between a package drop-off and a package theft; entertainment devices that automatically detect low-resolution video streams and upscale them with excellent perceptual quality, making better use of high-resolution TV displays; and the now nearly ubiquitous video-conferencing applications, which can be enhanced with higher-quality video and audio and made available on cost-effective devices.
Other examples may seem more futuristic: a refrigerator that suggests what to make for dinner based on its contents; an oven that tells you when your meal is cooked to perfection; a virtual personal yoga trainer that reminds you to straighten your arms during a pose; and home automation devices that work together to anticipate the homeowner’s needs, from heating the house to preparing food to choosing what to watch on TV.
Such solutions can combine video, vision, and voice sensors with AI processing capabilities to bring enhanced functionality to a new generation of familiar devices such as smart displays and soundbars, set-top boxes, appliances, and security cameras.
What each of these applications has in common is the need for an edge AI solution specifically tailored for the smart home rather than for smartphone or automotive applications. To further democratize edge AI, a solution needs to:
- support a multi-modal, AI-enhanced user experience that combines voice, video, and vision in one efficient system;
- be accessible to a wide range of AI developers and innovators through standard tools (a brief sketch of such tooling follows this list); and
- provide security and privacy measures that meet consumer expectations.
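To make the "standard tools" point concrete, here is a minimal sketch of how a model trained in a common framework could be prepared for on-device use with TensorFlow's TFLite converter. The model directory and file names are hypothetical placeholders, not references to any specific Synaptics product or workflow.

```python
# Sketch: shrinking a trained model for on-device (edge) inference with
# standard, widely available tooling. "speaker_id_model/" is a hypothetical
# trained TensorFlow model; nothing here is specific to a particular SoC.
import tensorflow as tf

converter = tf.lite.TFLiteConverter.from_saved_model("speaker_id_model/")
# Post-training (dynamic-range) quantization keeps the model small and fast
# enough for low-cost edge hardware.
converter.optimizations = [tf.lite.Optimize.DEFAULT]
tflite_model = converter.convert()

with open("speaker_id_quant.tflite", "wb") as f:
    f.write(tflite_model)
```

The resulting .tflite file is what a developer would ship to the device and load with an on-device runtime, as illustrated in the later sketches.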
Smart home HMI requires a multi-modal approach
As discussed earlier, edge AI solutions for smartphones and automotive applications have focused primarily on camera-vision use cases. In the smart home, however, a multi-modal HMI is a critical element in enhancing the user experience in this new era of connected devices. Take the example of a set-top box. This application would require video AI, perhaps in the form of the video enhancements discussed earlier. It would also require voice AI to identify, from their voice commands, who is watching the TV and configure the experience accordingly, for example by making it easier to select their favorite shows. It may even require vision AI, with a built-in camera that enables an enhanced and intuitive video-conferencing experience while chatting with distant family members.
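As a rough illustration of what such a multi-modal pipeline could look like in software, the sketch below runs a voice (speaker-identification) model and a vision (person-presence) model on the same device and combines their outputs to pick a viewer profile. The model files, thresholds, and preprocessing are hypothetical placeholders, assuming a standard TFLite runtime on the device rather than any particular vendor SDK.

```python
# Sketch of a multi-modal HMI loop for a set-top-box-class device: a voice
# model identifies who is speaking while a vision model checks whether
# anyone is in front of the camera, and the two results are combined to
# personalize the experience. Model files and thresholds are hypothetical.
import numpy as np
import tflite_runtime.interpreter as tflite

def load_model(path):
    interp = tflite.Interpreter(model_path=path)
    interp.allocate_tensors()
    return interp

voice_model = load_model("speaker_id_quant.tflite")   # hypothetical voice model
vision_model = load_model("person_presence.tflite")   # hypothetical vision model

def run(interp, frame):
    """Run one inference on a single preprocessed input frame."""
    inp = interp.get_input_details()[0]
    out = interp.get_output_details()[0]
    interp.set_tensor(inp["index"], frame.astype(inp["dtype"]))
    interp.invoke()
    return interp.get_tensor(out["index"])

def choose_profile(audio_frame, video_frame):
    speaker_scores = run(voice_model, audio_frame)
    presence_scores = run(vision_model, video_frame)
    # Combine modalities: only switch profiles when someone is actually in
    # front of the TV and the voice is recognized with high confidence.
    if presence_scores.max() > 0.8 and speaker_scores.max() > 0.9:
        return int(np.argmax(speaker_scores))  # index of a recognized household member
    return None  # fall back to a default, non-personalized profile
```

In a real product, the audio and video frames would come from the device's microphones and camera, and the video-enhancement path would run as a separate, continuous stream alongside this control logic.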
The ideal solution would be a smart-home-focused SoC that supports high-performance video, voice, and vision processing together with an integrated AI accelerator. The Synaptics VS600 SoC family is an example of such a solution. This approach is not only optimized to meet the multi-modal AI performance requirements of smart home applications; integrating it all into a single chip also makes it accessible to common household products sold at consumer price points.
Such a solution begins with an SoC platform that integrates multiple types of processing engines: CPU, NPU, GPU, and ISP, along with interfaces to high-performance cameras and displays. This architecture enables the desired combination of highly secure, low-cost inferencing and real-time, multi-modal performance. The Synaptics Edge AI family is a series of SoCs, each highly targeted at its given consumer application. Each SoC in the family integrates the required processing cores together with the appropriate level of integrated AI performance for that application.
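To show how an application would actually reach an integrated NPU like this, here is a minimal sketch using TFLite's delegate mechanism, one common way a runtime hands supported operations to an accelerator. The delegate library name is a hypothetical vendor-supplied file, not an actual Synaptics artifact, and the code falls back to the CPU cores when no delegate is available.

```python
# Sketch: routing inference to an integrated NPU through the TFLite delegate
# mechanism. "libvendor_npu_delegate.so" is a hypothetical vendor library;
# if it cannot be loaded, the same model runs on the CPU instead.
import tflite_runtime.interpreter as tflite

def make_interpreter(model_path, delegate_lib="libvendor_npu_delegate.so"):
    try:
        npu = tflite.load_delegate(delegate_lib)  # offload supported ops to the NPU
        return tflite.Interpreter(model_path=model_path,
                                  experimental_delegates=[npu])
    except (ValueError, OSError):
        return tflite.Interpreter(model_path=model_path)  # CPU fallback

interp = make_interpreter("person_presence.tflite")
interp.allocate_tensors()
```

The same model file can therefore be developed and tested on a laptop CPU and then accelerated on the target device simply by supplying the appropriate delegate at load time.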