The term edge computing refers to bringing computing resources such as processing and data storage closer to the “edge” of a network.
Traditionally, users or devices connected to a network would send data to a centralized location for storage and processing, but this approach introduces latency, particularly now that any product or application can have users all over the globe.
Edge computing, then, describes an alternative approach – a distributed computing paradigm where data is processed in a location that’s physically close to its origin to minimize latency.
For example, consider an application with users all over the world. If that application relies on a centralized SQL database located in Chicago, a request from a user in Germany has to travel halfway across the world and back – almost 10,000 miles in total!
If, however, the application relies on a distributed database such as CockroachDB, a German user’s request might only have to travel a small number of miles to an EU-based node where it can be processed, greatly reducing latency for the user.
This reduction in latency is the driving force behind the move towards edge computing.
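The latency gap above can be sketched with a back-of-the-envelope calculation. The sketch below is an illustration, not a measurement: the fiber speed (~200,000 km/s, about two-thirds the speed of light) and the round-trip distances (roughly 14,000 km for Germany–Chicago, 200 km to a hypothetical nearby EU node) are assumed figures, and real requests add routing and processing overhead on top of pure propagation delay.

```python
# Rough propagation-delay estimate. All figures are assumptions
# for illustration; real-world latency also includes routing,
# queuing, and server processing time.
FIBER_SPEED_KM_S = 200_000  # ~2/3 the speed of light, typical for fiber


def round_trip_ms(round_trip_km: float) -> float:
    """Propagation delay in milliseconds for a given round-trip distance."""
    return round_trip_km / FIBER_SPEED_KM_S * 1000


# Germany -> Chicago -> Germany: assume ~14,000 km round trip
print(round_trip_ms(14_000))  # 70.0 ms of propagation delay alone

# Germany -> assumed nearby EU node: ~200 km round trip
print(round_trip_ms(200))  # 1.0 ms
```

Even under these simplified assumptions, serving the request from a nearby node cuts propagation delay from tens of milliseconds to about a millisecond, which is the effect a geographically distributed database is designed to exploit.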