The hype around edge computing is growing, and for good reason. By moving compute and storage closer to where data is generated and consumed, such as IoT devices and end-user applications, organizations are able to deliver low-latency, reliable, and highly available experiences to even the most bandwidth-hungry, data-intensive applications.
While delivering fast, reliable, immersive, seamless customer experiences is one of the key drivers of the technology, another reason that is often underestimated is that edge computing helps organizations comply with strict data privacy and governance laws that hold companies responsible for transferring sensitive information to central cloud servers.
Improved network resiliency and reduced bandwidth costs are also driving adoption. In short, edge computing can enable applications that are compliant, always on and always fast anywhere in the world, without breaking the bank.
Unsurprisingly, market research firm IDC predicts that edge networks will account for more than 60% of all deployed cloud infrastructures by 2023, and global spending on edge computing will reach $274 billion by 2025.
Plus, with the influx of IoT devices – the State of IoT Spring 2022 report estimates that approximately 27 billion devices will be connected to the internet by 2025 – companies have the opportunity to use the technology to innovate at the edge and differentiate themselves from competitors.
In this article, I will review the progress of edge computing implementations and discuss ways to develop an edge strategy for the future.
From on-premises servers to the cloud edge
Early edge computing deployments were custom hybrid clouds: applications and databases ran on on-premises servers that the company deployed and managed, backed by a cloud data center. A standard batch file transfer system was typically used to move data between the on-premises servers and the backing data center.
Between capital costs and operational costs, scaling and managing on-premises data centers can be out of reach for many organizations. Not to mention there are use cases, such as offshore oil rigs and airplanes, where setting up on-premises servers is simply not feasible due to factors such as space and power requirements.
To address concerns about the cost and complexity of managing distributed edge infrastructures, it is important that next-generation edge computing workloads leverage the managed edge infrastructure solutions offered by major cloud providers, including AWS Outposts, Google Distributed Cloud and Azure Private MEC.
Instead of multiple on-premises servers storing and processing data, these edge infrastructure offerings can do the work. Organizations can save money by reducing the cost of managing distributed servers while taking advantage of the low latency that edge computing provides.
Furthermore, offerings such as AWS Wavelength allow edge deployments to take advantage of the high bandwidth and low latency of 5G access networks.
Using managed cloud edge infrastructure and access to high-bandwidth edge networks solves only part of the problem. Another key element of the edge technology stack is the database and data synchronization layer.
In edge deployments that rely on outdated file-based data transfer mechanisms, edge applications are at risk of running on stale data. Therefore, it is important for organizations to build an edge strategy around a database that is suited to today’s distributed architectures.
Using an edge-ready database to empower edge strategies
Organizations can store and process data in multiple tiers of a distributed architecture: in central cloud data centers, at cloud edge locations, and on end-user devices. Service performance and availability improve at each tier closer to the end user.
To that end, embedding a database with the application on the device provides the highest levels of reliability and responsiveness, even when network connectivity is unreliable or non-existent.
However, there are cases where local data processing is not enough to gain relevant insights, or where devices are not capable of local data storage and processing. In such cases, apps and databases distributed to the cloud edge can process data from all downstream edge devices while taking advantage of the low-latency, high-bandwidth pipes of the edge network.
Of course, hosting a database in the central cloud data centers is essential for long-term data persistence and aggregation across edge locations. In this multi-tier architecture, the amount of data backhauled to central databases over the internet is minimized by processing most of the data at the edge.
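As a rough illustration of that flow, the sketch below shows how a cloud edge tier might reduce raw readings from downstream devices into a compact per-site summary before backhauling it to the central cloud. All names, data shapes and functions here are assumptions for illustration, not any specific vendor’s API.

```python
# Minimal sketch of the multi-tier flow described above: devices produce raw
# readings, the cloud edge aggregates them, and only a compact summary is
# backhauled to the central cloud. Names and shapes are illustrative only.

from dataclasses import dataclass
from statistics import mean
from typing import Iterable


@dataclass
class Reading:
    device_id: str
    site_id: str
    temperature_c: float


def aggregate_at_edge(readings: Iterable[Reading]) -> dict:
    """Runs at the cloud edge: reduce raw device data to a per-site summary."""
    items = list(readings)
    return {
        "site": items[0].site_id if items else None,
        "count": len(items),
        "avg_temperature_c": mean(r.temperature_c for r in items) if items else None,
    }


def backhaul_to_central_cloud(summary: dict) -> None:
    """Placeholder for sending only the aggregated summary upstream."""
    print(f"sending {summary} to central cloud")


if __name__ == "__main__":
    raw = [Reading("sensor-1", "site-42", 3.9), Reading("sensor-2", "site-42", 4.3)]
    backhaul_to_central_cloud(aggregate_at_edge(raw))
```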
With the right distributed database, organizations can ensure that data is consistent and synchronized at every level. This process isn’t about duplicating or replicating data across each layer; rather, it is about transferring only the relevant data in a way that is not affected by network outages.
Take retail, for example. Only store-related data, such as in-store promotions, is transferred to retail edge locations, and those promotions can be synchronized in real time. This ensures each store location works only with the data that is relevant to it.
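The sketch below shows one way such selective synchronization can be expressed: a routing function that decides which edge locations a document should replicate to. It mirrors the channel-and-filter concept found in edge-ready databases, but the function and document fields are hypothetical.

```python
# Illustrative sync filter: decide which retail edge locations a document
# replicates to. The document shape and location names are assumptions.

def route_document(doc: dict) -> set[str]:
    """Return the set of edge locations a document should sync to."""
    if doc.get("type") == "promotion":
        # In-store promotions flow only to the stores they apply to.
        return set(doc.get("store_ids", []))
    if doc.get("type") == "payroll":
        # Sensitive corporate data never leaves the central cloud.
        return set()
    # Everything else is broadcast to all stores.
    return {"all-stores"}


promo = {"type": "promotion", "store_ids": ["store-17", "store-42"], "discount": 0.2}
print(route_document(promo))  # {'store-17', 'store-42'}
```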
It is also important to understand that data management in distributed environments can become challenging. At the edge, organizations often deal with ephemeral data, and the need to enforce policies around data access and retention at the granularity of an edge location makes things extremely complex.
Therefore, organizations planning their edge strategies should consider a data platform capable of granting access to specific subsets of data only to authorized users and enforcing data retention standards across tiers and devices, all while ensuring sensitive data never leaves the edge.
An example of this is a cruise line that gives a sailing vessel access to voyage-related data while it is at sea. At the end of the trip, that access is automatically revoked for cruise line employees, with or without an internet connection, to ensure the data remains protected.
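One way revocation can work without connectivity is to evaluate a time-bounded access policy locally on the vessel, so access simply expires when the voyage ends. The policy fields and roles below are assumptions made for this sketch, not a particular product’s access model.

```python
# Sketch of access control enforced locally on the vessel, so revocation at
# the end of the voyage works even without an internet connection.
# Field names, roles and the policy shape are illustrative assumptions.

from datetime import datetime, timezone


def can_access(policy: dict, user_role: str, now: datetime) -> bool:
    """Grant access only to permitted roles and only while the voyage lasts."""
    within_window = policy["valid_from"] <= now <= policy["valid_until"]
    return within_window and user_role in policy["roles"]


voyage_policy = {
    "roles": {"crew", "guest-services"},
    "valid_from": datetime(2022, 6, 1, tzinfo=timezone.utc),
    "valid_until": datetime(2022, 6, 8, tzinfo=timezone.utc),  # end of the trip
}

print(can_access(voyage_policy, "crew", datetime(2022, 6, 5, tzinfo=timezone.utc)))  # True
print(can_access(voyage_policy, "crew", datetime(2022, 6, 9, tzinfo=timezone.utc)))  # False: access auto-expires
```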
Go edge first
The right edge strategy enables organizations to take advantage of the growing ocean of data coming from edge devices. And as the number of applications at the edge grows, organizations looking to be at the forefront of innovation must extend their core cloud strategies to include edge computing.

Priya Rajagopal is the director of product management at Couchbase (NASDAQ: BASE), a provider of a leading modern enterprise application database relied on by 30% of the Fortune 100. With over 20 years of experience building software solutions, Priya is a co-inventor on 22 technology patents.