Before it was cool — a full two decades before Range Resources Corp. began seasoning its corporate presentations with terms like “machine learning” and “neural networks” — West Virginia University professor Shahab Mohaghegh became obsessed with the industry’s big data.
He started studying how machines could process data generated during oil and gas production, and learn to spot patterns. Those patterns could form the basis of predictive models that could help operators narrow down where to drill and estimate how much a well will produce.
When shale development started to bubble up in the early 2000s, Mr. Mohaghegh figured it would be the perfect candidate for machine learning.
It had the right ingredients, including advanced instruments at each well site producing monstrous amounts of data. Shale was also new geology, in the sense that while its properties have been known for years, the industry couldn’t produce large quantities of oil and gas from it before the marriage of horizontal drilling and high-volume hydraulic fracturing.
Here, traditional modeling may not paint the whole picture, he figured.
“We massage it, we change it to fit shale,” he said. “But it has very little to do with the reality, unfortunately.”
If Mr. Mohaghegh is right, that means metrics that banks, analysts and investors use to evaluate a company’s potential — such as how oil and gas production will decline over time — must be taken with a grain of salt or the “whole salt shaker,” as he suggests.
Traditionally, reservoir engineers model the potential production of a well by using formulas derived from general understanding of geology and physics. The formula dictates what data gets plugged into it, and only includes a small number of factors. In the universe of shale data, that leaves oodles of potentially meaningful information on the cutting room floor.
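The classic example of such a formula is the Arps decline curve, which forecasts a well's falling production rate from just a handful of fitted parameters. A minimal sketch of the hyperbolic form, with hypothetical parameter values chosen only for illustration:

```python
def arps_hyperbolic(qi, di, b, t):
    """Arps hyperbolic decline: production rate at time t.

    qi: initial production rate
    di: initial decline rate
    b:  hyperbolic exponent (0 < b <= 1)
    """
    return qi / (1.0 + b * di * t) ** (1.0 / b)

# Forecast two years of monthly rates for a hypothetical well:
# everything about the well is reduced to three numbers.
rates = [arps_hyperbolic(qi=1000.0, di=0.10, b=0.9, t=m) for m in range(24)]
```

Every measurement that doesn't map onto one of those three parameters — sensor readings, completion details, nearby well behavior — is the data left on the cutting room floor.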
“Shale has been incredibly nice and incredibly generous to our operators by providing so much oil and gas regardless of what we do to it,” Mr. Mohaghegh said.
But what comes out of the well is still a small fraction of what’s in the ground. As much as 40 percent of the fractures blasted in the rock from a well bore don’t produce any gas.
Can a computer crunch enough data to figure out why?
Going to Silicon Valley
Last year, some of the biggest oil companies in the world — including BP and Royal Dutch Shell — appealed to a group of Silicon Valley entrepreneurs at the annual TiEcon event to “learn what we do so you can build what we need.”
Neal Dikeman, senior venture principal with Shell, delivered the missive.
“We will not be running oil companies in 2030 like we’re running them now. You can’t. They’re not smart enough,” Mr. Dikeman said.
Instead, he said, every point that can be metered should be metered, and all those instruments should be “dirt cheap” and seamlessly transmit data into a kind of corporate brain — a job for the speed-driven ethic of Silicon Valley.

Back in 2011, when Texas-based oil and gas service firm Baker Hughes opened its Palo Alto Innovation Center, machine learning wasn’t really on the radar of most industry players. In 2014, Baker Hughes tapped Sammy Haroon as the center’s director. His vision was to put advanced data analytics at the center of its pursuits.
Last year, Baker Hughes rolled out software that, among other things, analyzes conditions around electric submersible pumps helping lift oil out of wells and alerts well operators in advance of a potential problem.
The software crunches data on historical pump performance and other variables, like pressure and temperature. Learning from that, the program can say what kinds of conditions resulted in the pump breaking a high percentage of the time.
Mr. Haroon, a nuclear engineer, wrote on his blog in December that big data analytics still has “skeptics a-plenty” in the oil and gas world.
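The idea — learning which operating conditions historically preceded failure — can be sketched in a few lines. This is a toy illustration with made-up records, not Baker Hughes' actual software:

```python
from collections import defaultdict

def failure_rates(records):
    """records: (pressure_bin, temperature_bin, failed) tuples from
    historical pump runs. Returns the observed failure rate for each
    combination of conditions."""
    counts = defaultdict(lambda: [0, 0])  # condition -> [failures, runs]
    for pressure, temp, failed in records:
        c = counts[(pressure, temp)]
        c[0] += int(failed)
        c[1] += 1
    return {cond: fails / runs for cond, (fails, runs) in counts.items()}

# Hypothetical run history: high-pressure, hot-running pumps
# failed in 2 of 3 recorded runs.
history = [
    ("high", "hot", True), ("high", "hot", True), ("high", "hot", False),
    ("low", "cool", False), ("low", "cool", False), ("low", "hot", True),
]
rates = failure_rates(history)
```

A real system would work with continuous sensor streams and far richer models, but the principle is the same: flag a live pump whose conditions look like the ones that came before past breakdowns.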
“This includes folks to whom a Microsoft Excel (spreadsheet) is the ultimate analytics tool, and scientists who believe that complex physics cannot be complemented (and in some cases, as I believe, replaced) by machine learning driven, continuously evolving models,” he said.
Another major stumbling block for the industry is data confidentiality, Mr. Mohaghegh said.
Landmark, a Halliburton business line, builds machine learning concepts into software products that help operators collect and sort data. But it doesn’t aggregate data from different operators — in fact, it doesn’t analyze any data collected through its apps.
“We ought to be able to do that but we’re not doing it yet,” said Chaminda Peries, a senior development manager with the company.
The operators aren’t yet at the point where they’re comfortable contributing to such an effort.
“They’ve traditionally been very guarded about their data security,” he said. “At the end of the day, it needs to come from the management of that company.”
Nevertheless, Mr. Peries said he’s seen much broader acceptance of big data analytics.
The hype around oil and gas helped propel big data into the spotlight, he said, and it was helped along by people’s everyday exposure to learning algorithms. A smartphone’s traffic app uses them; LinkedIn pages use them. When an online store reminds a customer that the family might be out of diapers, its machines have digested data and learned to anticipate when the next order might come.
In a study published last year, Mr. Mohaghegh used geologic and operational data from a group of 200 wells in southwestern Pennsylvania to gauge which well design factors contributed most to production.
The results showed that the operator, intentionally or not, changed how much water and sand was pumped into wells in lower quality shale areas vs. higher quality rock. The inconsistency led to worse production than would be expected from the higher quality shale wells.
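Ranking which design factors contribute most to production is, at its simplest, a question of which inputs move with the output. A toy version of that ranking — invented numbers, and far cruder than Mr. Mohaghegh's actual analysis — using plain correlation:

```python
def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical design factors for four wells, and their production.
design_factors = {
    "water_volume": [5.0, 7.5, 6.0, 9.0],
    "sand_tonnage": [1.2, 1.8, 1.4, 2.1],
    "stage_count":  [20, 22, 18, 25],
}
production = [3.1, 4.4, 3.5, 5.2]

# Rank factors by how strongly they track production.
ranked = sorted(design_factors,
                key=lambda f: abs(pearson(design_factors[f], production)),
                reverse=True)
```

Machine learning models go well beyond pairwise correlation — capturing interactions between factors and nonlinear effects — but the goal is the same: let the data, rather than a preset formula, say which knobs matter.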
Producers have increasingly narrowed their operations to their sweetest spots — the areas with the best rock — to maximize returns in a low commodity climate.
If, as Mr. Mohaghegh’s analysis of those wells in southwestern Pennsylvania shows, a company isn’t listening to its data, it may be shortchanging its most promising wells.
Over the years, Mr. Mohaghegh’s work has gained traction in the oil and gas industry. He helped Anadarko Petroleum analyze data to determine how to coax more oil out of its wells. Working with data from an international oil and gas firm, Mr. Mohaghegh determined which oil service contractor, such as Halliburton or Baker Hughes, did the best job at fracking its wells.
Range Resources has said it is working with another oil and gas analytics firm to glean insights from its data, after a federally funded study with Mr. Mohaghegh several years ago helped the Texas-based company dip its toe into the possibilities.
At Southwestern Energy Corp., a Texas-based operator that cut its shale teeth on the Fayetteville Shale in Arkansas before shifting most of its focus to the Marcellus, big data analytics is ramping up across divisions even as drilling has been suspended until it is more economical.
Southwestern started taking stock of the data it has in 2010. Two years later, it began collecting information with the express purpose of running it through machine learning algorithms.
“Now, we have enough wells, enough geologic information, that we can begin to put those together to discover what we don’t know,” said Mark Reynolds, senior solutions architect and knowledge engineer, who pioneered the effort.
For Mr. Reynolds, it became obvious looking at the data that the mechanics of shale didn’t yield to traditional models. In 20 years, he said, a different oilfield boom may render obsolete or, at best, incomplete all the groundbreaking insights being gathered today.
“When you talk to the generals, they say they’re always prepared to fight the last war,” he said. “That’s just human nature.”
Anya Litvak: email@example.com or 412-263-1455.