
Fog Computing

The term “fog computing” (or “edge computing”) means that, rather than hosting and working from a centralized cloud, fog systems operate at the edge of the network. It is a term for placing some processes and resources at the edge of the cloud, instead of establishing channels that carry everything to the cloud for storage and use.
Fog computing tackles an important problem in cloud computing: reducing the need for bandwidth by aggregating data at certain access points instead of sending every bit of information over cloud channels. This distributed strategy lowers costs and improves efficiency.
More interestingly, it is one approach to dealing with the emerging concept of the Internet of Things (IoT).
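As a rough illustration of the aggregation idea, consider the following Python sketch, in which an access point collapses a batch of raw sensor readings into a single summary record before anything crosses the cloud link. The function names and values are purely illustrative, not taken from any particular platform.

```python
# Minimal sketch (hypothetical names) of bandwidth-saving aggregation at an
# access point: many raw readings in, one compact summary out.
from statistics import mean

def aggregate(readings: list[float]) -> dict:
    """Collapse a batch of raw sensor values into one compact summary."""
    return {"count": len(readings), "min": min(readings),
            "max": max(readings), "mean": round(mean(readings), 2)}

raw = [21.0, 21.2, 20.9, 21.1, 35.6, 21.0]   # six raw readings collected at the edge
summary = aggregate(raw)                      # only this one record travels upstream
print(summary)  # {'count': 6, 'min': 20.9, 'max': 35.6, 'mean': 23.47}
```

Six readings become one record, so the cloud link carries a fraction of the original traffic while still conveying what the edge observed.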
Fog computing extends the cloud computing paradigm to the edge of the network to address applications and services that do not fit the cloud paradigm due to technical and infrastructure limitations, including:
• Applications that require very low and predictable latency
• Geographically distributed applications
• Fast mobile applications
• Large-scale distributed control systems

Applications of fog computing

Tech giants like IBM are the driving force behind fog computing, and they link the concept to IoT. Today, there might be hundreds of connected devices in an office or data center, but in just a few years that number could explode to thousands or tens of thousands, all connected and communicating. Most of the buzz around fog has a direct correlation with IoT. The fact that everything from cars to thermostats is gaining web intelligence means that direct user-end computing and communication may soon be more important than ever. The following are some practical examples:
Connected cars: Fog computing is ideal for connected cars because real-time interaction makes communication between cars, access points and traffic lights as safe and efficient as possible.
Smart grids: Fog computing allows fast machine-to-machine (M2M) handshakes and human-to-machine interactions (HMI), which would work in cooperation with the cloud.
Smart cities: Fog computing would be able to collect sensor data at every level of a city's activity and integrate the otherwise mutually independent network entities within it.
Healthcare: The cloud computing market for healthcare is expected to reach $5.4 billion by 2017, according to a MarketsandMarkets report, and fog computing would allow this on a more localized level.
A closer look at fog computing shows that it is about making decisions as close to the data as possible. Hadoop and other big data solutions started the trend of bringing processing to where the data lives; fog computing does the same on a larger scale. Decisions should be made as close as possible to where the data is generated, and raw data should be stopped from ever reaching the cloud: only valuable data should travel over cloud networks.
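To make that concrete, here is a minimal Python sketch of an edge gateway deciding locally which readings are "valuable" enough to forward. All names, the threshold and the endpoint are hypothetical, added only to illustrate the idea of filtering at the point where data is generated.

```python
# Hypothetical sketch: an edge gateway decides locally which sensor readings
# are worth forwarding to the cloud; everything else stays at the edge.

CLOUD_ENDPOINT = "https://example.com/ingest"  # placeholder URL
TEMP_THRESHOLD = 75.0                          # forward only readings that look anomalous

def is_valuable(reading: dict) -> bool:
    """Local decision: does this reading need to travel to the cloud?"""
    return reading["temperature"] > TEMP_THRESHOLD or bool(reading.get("error_flag"))

def handle_reading(reading: dict, forward) -> None:
    if is_valuable(reading):
        forward(CLOUD_ENDPOINT, reading)   # only valuable data uses the uplink
    # all other readings are aggregated or dropped locally

# Example: three readings arrive, only the anomalous one is forwarded.
sent = []
for r in [{"temperature": 21.0}, {"temperature": 80.2}, {"temperature": 22.5}]:
    handle_reading(r, lambda url, payload: sent.append(payload))
print(sent)  # [{'temperature': 80.2}]
```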
There are economic advantages to using fog computing. All that is needed is a simple solution (or several) to train models and send them to highly optimized, low-resource execution engines that can be easily embedded in devices, mobile phones and smart hubs or gateways.
To achieve this goal, fog computing is best done with machine learning models that are trained on a fraction of the data in the cloud. Once a model is considered adequate, it is pushed down to the devices. Algorithms such as decision trees, fuzzy logic, or even a deep belief network can then run locally on a device to make decisions; this is far cheaper than building cloud infrastructure that has to handle raw data from millions of devices.
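A hedged sketch of that train-in-the-cloud, decide-at-the-edge pattern might look like the following. It assumes scikit-learn and joblib are available; the features, labels and file name are invented for illustration and do not come from any specific fog platform.

```python
# Sketch of the pattern described above: train a small model on a sample of
# data in the cloud, then ship the model artifact to devices for local decisions.
from sklearn.tree import DecisionTreeClassifier
import joblib

# --- Cloud side: train on a small labelled sample of the device data ---
X_sample = [[20.1, 0.4], [22.3, 0.5], [80.7, 0.9], [78.2, 0.8]]  # e.g. temperature, vibration
y_sample = [0, 0, 1, 1]                                          # 0 = normal, 1 = fault
model = DecisionTreeClassifier(max_depth=3).fit(X_sample, y_sample)
joblib.dump(model, "edge_model.joblib")   # push this artifact out to devices/gateways

# --- Device side: load the small model and decide locally ---
edge_model = joblib.load("edge_model.joblib")
reading = [[79.5, 0.85]]
if edge_model.predict(reading)[0] == 1:
    print("fault detected locally; send only a summary to the cloud")
```

The device never streams raw data upstream; it evaluates a compact, pre-trained model and reports only the outcomes that matter.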

What is next for fog computing?

Fog computing can really be thought of as a way of providing services more immediately, but also as a way of bypassing the wider internet, whose speeds are largely dependent on carriers.
Google and Facebook are among several companies looking into alternate means of internet access, such as balloons and drones, to avoid network bottlenecks. But smaller organizations could create a fog out of whatever devices are already around to establish closer, quicker connections to compute resources.
There will certainly still be a place for more centralized, aggregated cloud computing, but it seems that as sensors move into more things and data grows at an enormous rate, a new approach to hosting applications will be needed. Fog computing, which can inventively use existing devices, could be the right approach for hosting an important new class of applications.
However, the movement to the edge does not diminish the importance of the center. On the contrary, it means that the data center needs to be a stronger nucleus for expanding computing architecture. InformationWeek contributor Kevin Casey recently wrote that the cloud hasn’t actually diminished server sales, as one might otherwise expect. Hybrid computing models, big data and IoT have contributed to server requirements that may be shifting, but aren’t really abating as some experts had predicted.
The IoT is a relevant bridge to some of the biggest issues dividing the cloud and the fog (like bandwidth, which could lead to a hybrid fog-cloud model) as organizations seek to balance their enterprise-grade data center needs with support for increasing edge network growth.
