Technology Speeds Headlong Toward the Edge
Adam Wilson, Special Publications Editor | 24 February 2020
The term “edge computing” may be relatively new, but the concept is not. Even the oil and gas industry, notoriously slow to adopt new technology, has been working on the edge for some time, although it only recently started using the term.
“Edge is not new for us,” said Dave Lafferty, president of Scientific Technical Services. “Conventional sensors are actually edge devices.” In fact, “a rig is the ultimate edge device,” added Andrew Bruce, chief executive officer of Data Gumbo, “because you’ve got everything right there.” They were speaking recently at the second annual Edge Computing Technologies in Oil and Gas conference in Houston.
Edge computing is simply the use of a computer in the field that communicates, either wirelessly or through wires, with an enterprise site somewhere else. Connecting devices to these computers provides advantages such as reduced bandwidth requirements and lower latency. “Rather than mindlessly streaming data up to the enterprise, what you can do is process that data locally and then send refined results back up to the cloud or the enterprise,” Lafferty said. “That can greatly reduce the amount of bandwidth that you need.”
These computers can also perform some autonomous actions. “It can be very simple, maybe just things like a mean or a standard deviation, to very complex analytics,” Lafferty said. Having certain actions performed autonomously, he added, “means you can spend a lot less on your telecommunications.”
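To make that pattern concrete, a minimal sketch in Python might look like the following. The sensor read and uplink functions are invented placeholders rather than any vendor's actual interface, and the one-minute window is an arbitrary choice.

```python
import json
import random
import statistics
import time

def read_sensor() -> float:
    # Placeholder: a real edge device would read a sensor or PLC tag here.
    return random.gauss(200.0, 5.0)

def send_to_enterprise(payload: str) -> None:
    # Placeholder uplink: a real device might publish to a broker or cloud API.
    print("uplink:", payload)

while True:
    # Buffer a window of raw readings locally instead of streaming each one upstream.
    window = [read_sensor() for _ in range(60)]

    # Refine the data on the edge device: simple statistics, per Lafferty's example.
    summary = {
        "ts": time.time(),
        "mean": round(statistics.mean(window), 2),
        "stdev": round(statistics.stdev(window), 2),
    }

    # Only the small, refined summary crosses the network.
    send_to_enterprise(json.dumps(summary))
    time.sleep(60)
```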
Security can also be enhanced at the edge. “Now that you have the horsepower in the field, you can do things like encryption and certificates,” Lafferty said. “Now you have the tools where you can actually have a high degree of integrity.” He added, however, that security should be “built in, not bolted on. What I see is most projects fail because they don’t consider security until the very end, and then they slap a firewall in front of it and call it good. And, of course, it doesn’t pass the audit.”
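As one illustration of what encryption and certificates can look like on a field computer, the sketch below uses Python's standard ssl module to open a connection that is both encrypted and mutually authenticated with a device certificate; the host name and certificate paths are placeholders.

```python
import socket
import ssl

ENTERPRISE_HOST = "gateway.example.com"  # placeholder enterprise endpoint
ENTERPRISE_PORT = 8883

# Trust only the operator's certificate authority when verifying the enterprise site.
context = ssl.create_default_context(cafile="/etc/edge/ca.pem")

# Present the edge device's own certificate so the server can authenticate it, too.
context.load_cert_chain(certfile="/etc/edge/device.crt", keyfile="/etc/edge/device.key")

with socket.create_connection((ENTERPRISE_HOST, ENTERPRISE_PORT)) as sock:
    with context.wrap_socket(sock, server_hostname=ENTERPRISE_HOST) as tls:
        # Everything sent on this socket is now encrypted and tied to a known device identity.
        tls.sendall(b'{"status": "ok"}')
```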
One of the greater benefits of edge computing is its ability to facilitate real-time analysis. “Real time is key,” said Hani Elshahawi, digitalization lead for deepwater technologies at Shell, “and, because real time is key, it favors the edge.”
As with any technological advancement, a few hurdles stand in the way, and most boil down to cost. To begin with, the cost of deployment at scale keeps a lot of edge projects from getting off the ground. Lafferty explained it this way: “Someone in isolation in some lab somewhere does five devices, and the boss says, ‘Great! Now let’s deploy 5,000 of them.’ Well, I can’t do that. It took me 2 years to get all this working.”
The cost and complexity of implementation are just the first hurdle. Once the system is deployed, the hurdle created by the cost of ownership looms. “Now that you have things like Linux out there, it has to be patched,” Lafferty said. “If you’re not patching, you have security issues. You have software-defined controllers that you’re pushing software back and forth and updates. So you have a lot of complexity in the field, and if you’re driving out there with a USB stick to reflash your device, it’s going to cost you between $500 and $1,000 per update times however many devices you have in your operation. You can see how quickly that value proposition goes away. If you’re offshore, it’s closer to $10,000 an update.”
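Worked through with the figures Lafferty quotes, the arithmetic is stark. The fleet size below is taken from his 5,000-device scaling example, and the per-visit cost is simply the midpoint of his onshore range.

```python
devices = 5_000          # fleet size from Lafferty's scaling example
onshore_visit = 750      # midpoint of the quoted $500 to $1,000 per onshore update
offshore_visit = 10_000  # quoted per-update cost offshore

print(f"One onshore update cycle:  ${devices * onshore_visit:,}")   # $3,750,000
print(f"One offshore update cycle: ${devices * offshore_visit:,}")  # $50,000,000, if the whole fleet were offshore
```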
Countering the cost hurdles are strong capabilities, untapped potential, and enabling concepts and technologies. Many edge devices have been around for years, although their contributions have been minimal. “We have a huge amount of embedded systems that we don’t take advantage of,” Lafferty said. “Every motor, every engine that we have has some sort of CAN [controller area network] bus on it, a Cat [Caterpillar] datalink or things like that. And these things take about 300 to 400 readings per second. It’s a whole wealth of information that we’re not utilizing. And, to unlock that potential, you really need edge because you need to 1) get it off the iron, but 2) put it in a form that’s useful.”
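A rough sketch of pulling that kind of data off the iron might look like the following. It assumes the python-can package and a Linux socketcan interface, and the two-byte decode is purely illustrative, since real buses need the equipment vendor's data map.

```python
import can  # assumes the python-can package and a running socketcan interface

# Connect to the engine's CAN bus; the channel name is a placeholder.
bus = can.interface.Bus(channel="can0", interface="socketcan")

totals: dict[int, float] = {}
counts: dict[int, int] = {}

# Fold hundreds of raw frames per second into running averages by message ID,
# so only a compact, useful summary ever needs to leave the edge device.
for _ in range(1000):
    msg = bus.recv(timeout=1.0)
    if msg is None:
        continue
    value = int.from_bytes(msg.data[:2], "big")  # illustrative decode only
    totals[msg.arbitration_id] = totals.get(msg.arbitration_id, 0.0) + value
    counts[msg.arbitration_id] = counts.get(msg.arbitration_id, 0) + 1

averages = {hex(msg_id): totals[msg_id] / counts[msg_id] for msg_id in totals}
print(averages)
```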
What Lafferty calls “software-defined controllers” can increase the usefulness of edge computing. Also called universal well controllers, these are pieces of hardware that can perform different functions, depending on the software that is loaded. “So, now you don’t have specialized pieces of equipment,” he said. Using artificial lift as an example, Lafferty said, “you initially would run the well on gravity drain, so maybe all you’re worried about is a flowmeter. But then, as you go to ESP [electrical submersible pump], you want an ESP controller; you just put software on that same device, and now it operates as an ESP controller. And then you transition to rod lift, you take that software and load the rod-lift controller software. So, one device can follow the asset along.”
Lafferty pointed out that these software-defined controllers tend to consist of commodity hardware using commodity operating systems, rather than proprietary, “which means it’s dirt cheap.”
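The idea lends itself to a simple sketch: one commodity box, several interchangeable pieces of controller software. The class names, well states, and set points below are invented for illustration and are not any vendor's actual control logic.

```python
from abc import ABC, abstractmethod

class LiftController(ABC):
    """The hardware stays the same; only the loaded control software changes."""

    @abstractmethod
    def control_step(self, well_state: dict) -> dict:
        ...

class GravityDrainController(LiftController):
    def control_step(self, well_state: dict) -> dict:
        # Early life: little to watch beyond the flowmeter.
        return {"low_flow_alarm": well_state["flow_rate"] <= 0.0}

class ESPController(LiftController):
    def control_step(self, well_state: dict) -> dict:
        # Same box, new software: now it behaves as an ESP controller.
        freq = 50.0 if well_state["intake_pressure"] > 150.0 else 45.0
        return {"pump_frequency_hz": freq}

class RodLiftController(LiftController):
    def control_step(self, well_state: dict) -> dict:
        # Later still, the same device follows the asset into rod lift.
        return {"strokes_per_minute": 6 if well_state["pump_fillage"] > 0.8 else 4}

# "Loading new software" on the universal controller is just swapping implementations.
controller: LiftController = GravityDrainController()
print(controller.control_step({"flow_rate": 12.0}))

controller = ESPController()  # the well has been converted to ESP
print(controller.control_step({"intake_pressure": 180.0}))
```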
The concept of “containers” also can boost usefulness at the edge. With this approach, each application on a device runs in its own container, carrying its own copy of the software environment it needs, meaning that, if one crashes, it doesn’t take the others down with it. These containers also have their own container input/output, “which means that I can now, very discretely, control who that piece of software talks to,” Lafferty said. “With containers, you can bring in another instance of a container with just one instruction. So, you can actually do an update. Say you’re flying a drone, [you can] update the guidance system while it’s in the air using containers. It’s that quick.”
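In practice, that kind of swap often runs through a container engine. The sketch below assumes the Docker SDK for Python, a running Docker daemon, and placeholder image and container names; it brings up a fresh instance of an updated service and retires the old one, roughly the one-instruction update Lafferty describes.

```python
import docker  # assumes the Docker SDK for Python and a running Docker daemon

client = docker.from_env()

# Bring in a new instance of the service from an updated image (placeholder tag).
new_instance = client.containers.run(
    "registry.example.com/edge/guidance:2.1",
    name="guidance-new",
    detach=True,
)

# Once the new container is up, retire the old one. Each container carries its own
# isolated software stack, so swapping one service does not disturb its neighbors.
old_instance = client.containers.get("guidance-old")
old_instance.stop()
old_instance.remove()
```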
While some edge devices may already be in place, and vast quantities of data already are being gathered, the ideal amount of edge computing to be used is still being debated. “Many have been hypothesizing that the edge will eat the cloud,” Elshahawi said. “I think it will eat part of it.”
Some say all collected data should be streamed to the cloud to be handled offsite. “Then you had a lot of edge people saying, no, you don’t need any kind of enterprise. We’ll just do everything in the field,” Lafferty said. “But, in fact, the answer is yes to both.” Proximity to the data source and the need for immediacy make some actions more appropriate for the edge, Lafferty said, while, for spotting broader trends, sending refined data up to the enterprise site makes more sense.
Another argument for processing data on the edge is the burden of moving large quantities of data to an enterprise site through the cloud and processing it there. Lafferty gave the example of a program that records high-frequency vibration data. “High frequency” here means 360 vibration measurements for every turn of a crankshaft. “If you tried to send that amount of data up through the cloud and process it in the cloud, it just wouldn’t work,” Lafferty said. Instead, the program in Lafferty’s example sends that information to a local controller, where a model is used to compute about 80 parameters. These parameters, which include device health and performance and maintenance indicators, are then sent to the enterprise site to be added to dashboards for workers to review. “This is a really good example where you can leverage both the local edge processing to refine the data but then send that refined data up to the enterprise such that it becomes an actionable form,” he said. “And now you can start to see trends across your operations.”
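The shape of that local refinement is easy to sketch. In the fragment below, the raw waveform, the handful of parameters, and the alert threshold are all invented stand-ins for the roughly 80 parameters in Lafferty's example, but the division of labor, with raw data reduced on the controller and only the results sent up, is the same.

```python
import json
import math
import random

SAMPLES_PER_REV = 360  # one vibration reading per degree of crankshaft rotation

def read_revolution() -> list[float]:
    # Simulated raw samples; a real controller reads its accelerometer directly.
    return [math.sin(math.radians(i)) + random.gauss(0.0, 0.05) for i in range(SAMPLES_PER_REV)]

def refine(samples: list[float]) -> dict:
    # Collapse 360 raw points into a few health and performance indicators on the edge.
    rms = math.sqrt(sum(s * s for s in samples) / len(samples))
    peak = max(abs(s) for s in samples)
    return {
        "rms": round(rms, 3),
        "peak": round(peak, 3),
        "crest_factor": round(peak / rms, 3),
        "alert": peak > 1.5,  # illustrative threshold, not a field-calibrated value
    }

# Only the refined parameters, never the raw waveform, go up to the enterprise dashboards.
print(json.dumps(refine(read_revolution())))
```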
All of these edge devices and activities require a fair amount of orchestration, which, in this instance, means automating updates and security. “It’s very important,” Lafferty said, “because doing five is easy, doing 5,000 is hard. And if you don’t have orchestration, if you don’t have a central management of things like your network, your security, your updates, it’s almost impossible to get value out of edge.”