NTT Tests The ‘Edge’ To Build A Sharper Internet Of Things


IN SPACE – AUGUST 15: In this handout photo provided by NASA, Astronaut Rick Mastracchio, STS-118 mission specialist, participates in the mission’s third planned session of extravehicular activity (EVA) as construction and maintenance continue on the International Space Station August 15, 2007 in Space. During the 5-hour, 28-minute spacewalk, Mastracchio and astronaut Clay Anderson (out of frame), Expedition 15 flight engineer, relocated the S-Band Antenna Sub-Assembly from Port 6 (P6) to Port 1 (P1) truss, installed a new transponder on P1 and retrieved the P6 transponder. (Photo by NASA via Getty Images)


Your laptop computes. It would do, after all… it’s a computer. But some of our planet’s compute processing is now carried out in so-called ‘edge’ computing zones, on remote machines such as sensors and other smart devices.

Not necessarily synonymous with the Internet of Things (IoT), edge computing happens ON IoT devices themselves, hence the need and validation for a separate term. Some edge computing happens on sophisticated large-scale devices, from hospital equipment to digital device installations at oil & gas facilities – and some of it just occurs on your smartphone. The common link that bonds both scenarios is that neither necessarily gets to enjoy a link to a cloud datacenter, i.e. the calculations and computations have to happen locally on the device in the first instance; hence, it’s all out on the edge.

But, as remote as it inherently is, how do we test edge computing and make sure that our smart (often smaller) machines are doing what they should be doing?

Paramount focus – the platform

For NTT’s Parm Sandhu, it comes down to many factors, but he is able to pinpoint a number of key trends and practices. In his role as vice president for enterprise 5G products and services at NTT Ltd, Sandhu says that governance, observability and management of the underlying hybrid edge compute platform in use, alongside its deployed application layer, are the paramount focus. Today, there’s a widespread trend for software tools in this layer to attempt to provide an automated ‘single pane of glass’ to manage multi-cloud hybrid hyperscaler environments as well as the edge computing estate.

“An enterprise’s mission-critical applications require guaranteed performance Service Level Agreements (SLAs) from its underlying edge compute platform. Enterprise technology managers require demonstrable capabilities that guarantee the underlying edge platform can meet the application performance requirements before they can confidently move mission-critical applications onto (or into a position where they are integrated with) an edge compute platform,” said NTT’s Sandhu.

He explains that guaranteed performance SLAs can only be delivered when the right operating system (OS) and hardware blueprint methodology are employed at the design and deployment phases. Then, the cloud management software in use must also be able to demonstrate that the specified SLA is actually being met during the RUN phase, when edge devices are powered up and working.
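To make that requirement a little more concrete, the minimal sketch below (written in Python, with an illustrative p99 latency target and made-up measurements, none of which come from NTT’s own platform or tooling) shows the kind of check a management layer might run at this stage to confirm that an agreed SLA is actually being met.

```python
# Minimal sketch: check a percentile latency SLA against measured samples.
# The p99 target and the sample values are illustrative assumptions only,
# not figures from NTT or any real edge platform.
from dataclasses import dataclass
from statistics import quantiles
from typing import List

@dataclass
class LatencySLA:
    percentile: int        # e.g. 99 for p99
    max_latency_ms: float  # agreed target latency at that percentile

def sla_met(samples_ms: List[float], sla: LatencySLA) -> bool:
    """Return True if the observed percentile latency is within the SLA target."""
    # statistics.quantiles with n=100 returns the 1st..99th percentile cut points.
    cut_points = quantiles(samples_ms, n=100)
    observed = cut_points[sla.percentile - 1]
    return observed <= sla.max_latency_ms

if __name__ == "__main__":
    measured = [12.1, 14.8, 13.5, 48.0, 15.2, 16.9, 11.4, 44.7, 13.0, 12.6]
    target = LatencySLA(percentile=99, max_latency_ms=50.0)
    print("SLA met:", sla_met(measured, target))
```

In real deployments the hard part is, of course, gathering trustworthy samples from devices that may only be intermittently connected – but the pass/fail logic need be no more exotic than this.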

Much-needed mimicry stage

NTT’s Yusuf Mai concurs with his colleague’s thoughts. In his role as NTT VP of solution architecture, he also feels that one of the bigger challenges in this space arises when a business looks to test and validate its edge software: all too often, there is no staging (or certification) environment. This is a computing environment that mimics what will eventually be the live ‘production’ environment as closely as possible.

But why is it hard to mimic a real production environment?

“Even if you can have a backup of data from already experienced real world data archives that has passed over (and through) the edge compute layer, it is still difficult to replay the data ingestion workload with the same data traffic pattern,” specified Mai. “This is because data is usually ingested from many data sources… and data produced from all these data sources depends on many ‘upstream’ events [a multiplicity of potential actions of machines, databases and users that occurs prior to the edge device needing to do its job].”

There’s a lot of focus here on precision engineering at the data level, so why is this all so important? A lot of the reason comes down to the fact that when it comes to an edge application (which is latency-sensitive, time-sensitive and exists in a multi-input, multi-output system), some issues only arise when the sequence of events follows a certain path, under certain timing.
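As a purely illustrative sketch of the replay problem (again in Python, where send_to_edge is a hypothetical stand-in for whatever ingestion interface a real staging environment would expose), reproducing recorded traffic means preserving not just the data but the original inter-arrival gaps – which is exactly the timing sensitivity described above.

```python
# Illustrative only: replay archived events against a staging edge endpoint,
# preserving the original inter-arrival timing. `send_to_edge` is a
# hypothetical stand-in for a real ingestion interface.
import time
from typing import Callable, Iterable, Tuple

Event = Tuple[float, dict]  # (original capture time in seconds, payload)

def replay(events: Iterable[Event],
           send_to_edge: Callable[[dict], None],
           speed: float = 1.0) -> None:
    """Replay events in capture order, sleeping out the original gaps.

    speed > 1.0 compresses time; speed = 1.0 aims to reproduce the recorded
    traffic pattern as faithfully as the host clock allows.
    """
    previous_ts = None
    for captured_at, payload in events:
        if previous_ts is not None:
            gap = (captured_at - previous_ts) / speed
            if gap > 0:
                time.sleep(gap)
        send_to_edge(payload)
        previous_ts = captured_at

if __name__ == "__main__":
    archive = [(0.00, {"sensor": "temp-1", "value": 21.4}),
               (0.05, {"sensor": "temp-2", "value": 22.0}),
               (0.30, {"sensor": "temp-1", "value": 21.6})]
    replay(archive, send_to_edge=print)  # print stands in for real ingestion
```

Even so, a single replay stream like this cannot reproduce the interleaving of many independent upstream sources, which is Mai’s point: some orderings simply never occur in the lab.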

Temporary fixes become fixtures

“This means no matter how much testing is done, some production defects can still be missed during the quality assurance phase,” explained Mai. “Moreover, in the case of a defect being discovered in production, the engineer may not be able to recreate the exact same defect to uncover the root cause. If the root cause cannot be found, the next best alternative is to apply a temporary fix, which will live in the system forever.”

But edge computing and the IoT is not the only part of our universe where temporary fixes become fixtures. As a randomly disconnected example, Egypt’s capital city is threaded with ‘kubri’ flyover bridges, some of which (so local opinion has it) were only ever designed to be temporary, but today they still stand.

Back in our edge compute world, we can see that sometimes the cost of building a complex simulation to thoroughly test a device or service is actually prohibitive. The consensus here is that we should use temporary fixes as sparingly as possible, not least because they typically degrade further and wear out over time.

What’s our edge future?

If we’ve got this far, then, how should we view the edge-powered IoT future? Mai and the NTT team advise that edge computing used to be dominated by siloed, disconnected ‘point’ solutions from one single corporate vendor – and back then, things were almost simpler.

“Today, an average end-to-end edge computing solution consists of components from multiple vendors. Add to that the fact that solutions are getting more complicated with multiple product and services vendors being used in the mix… and you can see that we have a lot to manage. Then think about the need to observe confidentiality, IP ownership and liability and you have another complex layer of obstacles on top of an already technically complex solution,” advised Mai.

The wider trends at play here point to the tighter use of technology policy.

Sometimes now enforced and managed via a Policy-as-Code approach, as we have explained here, such policy moves us towards an IoT edge world where IT governance should require a business (and indeed the edge device vendor supplying that business) to comply with certain policies.
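As a loose illustration of the idea – not NTT’s tooling, and with a policy and manifest format invented for this article – a Policy-as-Code check can be as plain as code that inspects an edge application’s deployment manifest and lists anything that breaks the agreed rules, including the container-deployability requirement Mai raises below.

```python
# Loose Policy-as-Code illustration: the policy rules and manifest fields are
# invented for this example and do not reflect any specific vendor's format.
from typing import List

POLICY = {
    "must_run_in_container": True,        # modules must be container-deployable
    "allowed_registries": ["registry.example.internal"],  # hypothetical registry
    "max_memory_mb": 512,                 # illustrative resource ceiling
}

def violations(manifest: dict) -> List[str]:
    """Return a list of human-readable policy violations for a manifest."""
    problems = []
    image = manifest.get("container_image", "")
    if POLICY["must_run_in_container"] and not image:
        problems.append("module is not packaged as a container image")
    if image and not any(image.startswith(reg) for reg in POLICY["allowed_registries"]):
        problems.append(f"image '{image}' is not from an approved registry")
    if manifest.get("memory_mb", 0) > POLICY["max_memory_mb"]:
        problems.append("requested memory exceeds the policy ceiling")
    return problems

if __name__ == "__main__":
    manifest = {"container_image": "registry.example.internal/vision-module:1.2",
                "memory_mb": 256}
    print(violations(manifest) or "manifest complies with policy")
```

The point is less the few lines of Python than the fact that the rules live in version control, where they can be reviewed, tested and audited like any other code.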

For example, sums up NTT’s Mai, all edge device application modules could be run in an environment where they must be deployable in containers and so can be managed through certain Edge Application Manager technologies – a formalized new approach to computing that we can already see being capitalized as EAM. If we can build our smart edge systems with some (or all and more) of these elements, then we can perhaps test them better, run them better and (when they go wrong) get the errors sorted with shorter Mean Time To Resolution (MTTR) figures.
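MTTR itself is simple arithmetic – the total time spent resolving incidents divided by the number of incidents – and a tiny, self-contained sketch (with invented incident timestamps) makes that explicit.

```python
# MTTR = total resolution time / number of incidents.
# The incident timestamps below are invented purely to show the arithmetic.
from datetime import datetime, timedelta
from typing import List, Tuple

Incident = Tuple[datetime, datetime]  # (detected_at, resolved_at)

def mttr(incidents: List[Incident]) -> timedelta:
    """Return the mean time to resolution across a list of incidents."""
    total = sum((resolved - detected for detected, resolved in incidents),
                timedelta())
    return total / len(incidents)

if __name__ == "__main__":
    history = [
        (datetime(2023, 1, 3, 9, 0), datetime(2023, 1, 3, 11, 30)),   # 2.5 hours
        (datetime(2023, 1, 9, 14, 0), datetime(2023, 1, 9, 14, 45)),  # 45 minutes
    ]
    print("MTTR:", mttr(history))  # 1:37:30
```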

As we build the IoT with edge compute power, these devices – or at least the central computing engine in them – are often smaller machines as standalone pieces of technology, but getting them to run efficiently is a big job.
