In this short article, you will learn what a CEO needs to know about machine learning: how it works, what you can do with it, and how to get started.
It is all about connecting A to B
What does a CEO need to know about machine learning? Information on ML is often confusing and sometimes downright misleading (I’m looking at you, Microsoft). But ML for business is simple:
“99% of the economic value created by AI today is through one type of AI, which is learning A to B or input to output mappings.” — Andrew Ng
Almost every business application of ML today is about learning to produce certain outputs from certain inputs:
How does ML learn to predict the correct output?
You collect examples of input –> output pairs (the more the better).
The ML algorithm learns the connection between input and output (that’s the magic).
You apply the trained algorithm (the “model”) to new input data to predict the correct output.
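The three steps above can be sketched in a few lines of code. This is a toy illustration with made-up numbers (apartment size in square meters as input A, price as output B); a real project would use a library such as scikit-learn, but the shape of the workflow is the same:

```python
# A minimal sketch of the three steps, assuming a toy pricing task.
# All numbers are invented for illustration.

def fit_line(pairs):
    """Step 2: learn the A -> B connection (here: a straight line y = a*x + b)."""
    n = len(pairs)
    mean_x = sum(x for x, _ in pairs) / n
    mean_y = sum(y for _, y in pairs) / n
    num = sum((x - mean_x) * (y - mean_y) for x, y in pairs)
    den = sum((x - mean_x) ** 2 for x, _ in pairs)
    a = num / den
    b = mean_y - a * mean_x
    return a, b

# Step 1: collect input -> output examples (the more the better).
examples = [(30, 90_000), (50, 150_000), (80, 240_000), (100, 300_000)]

# Step 2: train the "model".
a, b = fit_line(examples)

# Step 3: apply the trained model to new input data.
predicted_price = a * 65 + b
print(round(predicted_price))  # 195000 for a 65 m² apartment
```

The "magic" in a real system is just a more sophisticated version of `fit_line`: an algorithm that finds the mapping from inputs to outputs that best fits the collected examples.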
In other words, almost everyone who uses ML to make $$ does it exactly like this.
Use ML when you have a lot of data
ML needs data
ML is powerful because it turns data into insights. But it is far less efficient at learning than people are (yes, way less efficient), so it needs a lot of data in order to learn. If you have lots of data, you should think about ML!
Data is a competitive advantage, not algorithms.
That’s why Google and Facebook have no problem open-sourcing their algorithms. But they definitely don’t open source their data. If you have a lot of data no one else has, that’s the perfect opportunity to build a unique ML system.
The 3 simplest ways to find your ML use cases
So you have a lot of data. Now, what do you do? Here are the 3 best ways I know to discover ML use cases:
1. Improve automated decision-making
Where do you have software that automates rule-based decisions?
For example:
Call Routing
Credit Scoring
Image Classification
Marketing Segmentation
Product Classification
Document screening
There is a good chance ML can improve the accuracy of these decisions because ML models can capture more of the underlying complexity that connects A to B. By comparison, when you write rules into software manually (the traditional way), you can only encode rudimentary dependencies.
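To make the contrast concrete, here is a deliberately tiny, made-up credit-screening example: a hand-coded rule with a fixed threshold versus a "learned" rule that picks its threshold from historical examples. All names and numbers are invented for illustration:

```python
# Historical examples: (income in k$, repaid loan?). Invented data.
history = [(20, False), (35, False), (45, True), (60, True),
           (80, True), (35, True), (35, False)]

def manual_rule(income):
    # Traditional software: a threshold someone hard-coded years ago.
    return income > 50

def learn_rule(examples):
    # "ML" in miniature: try each candidate threshold and keep the one
    # that gets the most historical decisions right.
    candidates = sorted({income for income, _ in examples})
    best = max(
        candidates,
        key=lambda t: sum((income > t) == repaid for income, repaid in examples),
    )
    return lambda income: income > best

learned_rule = learn_rule(history)
print(sum(manual_rule(i) == r for i, r in history))   # 5 of 7 correct
print(sum(learned_rule(i) == r for i, r in history))  # 6 of 7 correct
```

A real credit model would weigh many inputs at once, which is exactly where hand-written rules stop scaling and learned models pull ahead.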
2. Things people can do in < 1 second
Another great heuristic I first heard from Andrew Ng is:
“Pretty much anything that a normal person can do in <1 sec, we can now automate with AI.” — Andrew Ng on Twitter
So what are some things humans can decide in < 1 sec?
Who’s in that picture?
Do I have a good feeling about this potential customer?
Does this look like a problematic CT scan?
Many jobs are a sequence of < 1-sec decisions. Like driving:
Is that person going to cross the street?
Am I too close to the sidewalk?
Should I slow down?
… and many, many more.
Anything you can do in less than 1 second, ML can most likely do too (or it will be able to soon).
3. Get inspired by Kaggle competitions
Large corporations like Zillow, Avito, Home Depot, Santander, Allstate, and Expedia are running data science competitions on Kaggle. These are challenges they want outside data scientists to solve. So these competitions give you an idea of what types of AI solutions they are working on. It’s really a great resource.
Building a good data science team is super hard (and expensive!)
Many companies struggle (and ultimately fail) to build an efficient data science team. Why is it so hard?
Misinformation about who to hire
Tough competition for talent
Few managers who can lead data science teams effectively
In addition…
You don’t know yet whether you need a data science team
You might have a lot of data and a lot of ideas, but that doesn’t mean you need your own data science team. Only after you’ve built your first AI systems will you really know how much manpower you’ll need in the long run.
Above all build something that works — fast
Your first goal should be to pick the lowest hanging AI fruit and finish it quickly. This gets you miles ahead:
You’ll achieve the tangible success that gets investors, the board, and your team excited;
Get to know the taste of ML and get real-life experience of what works and what doesn’t;
Have better ideas about where to use AI next.
With that first finished system under your belt, you are in a much better position to hire and train a data science team. Or maybe you find that you don’t want (or need) to hire a whole team because that low-hanging fruit already accounted for 80% of what you can feasibly do with AI.
ML teams should work across the whole company
If you do build an AI team, you should build it for the whole company, not just one department: a horizontal AI team.
Don’t think in departments
AI experience is very transferable: To a data scientist, your CRM data looks almost the same as your inventory data. And to an algorithm, they look even more similar. It makes a lot of sense to build one AI task force for the whole firm.
Side note: For that to work, you should also have your data in one place!
More life-saving tips
Don’t listen to people selling a “better algorithm”
Either they have no experience, or they’re trying to sell you open source at a markup. In AI especially, everyone is cooking with water, meaning they’re using publicly available algorithms.
Above all, focus on business experience & engineering quality
Work with someone who takes the time to really understand the business problem you’re trying to solve and who has high standards for engineering quality. If you want lots of results with fewer headaches, the plumbing is more important than the cute algorithm that flows through it.
Magic Triangle meetings 😋
The best ideas develop when you put three types of people in a room:
A person who’s in touch with the current business priorities (you?),
A person who knows the data you have (your database engineer), and
Someone who has lots of practical experience building ML systems.
Together, these three people can make realistic plans, really fast.
In conclusion:
AI is about connecting A to B.
Look into processes that involve a lot of data and < 1-sec decisions.
Get the first win before you plan big.
That’s what a CEO needs to know about machine learning. If you need help along the way, drop me a line. I’m always happy to hear about exciting challenges: manja.bogicevic at kageera.ai
6 Applications of Machine Learning in Oil and Gas
The issue of environmental sustainability is a major concern for governments and for players in the oil and gas industry worldwide. The negative impacts of oil and gas operations, such as pollution, threaten not only people’s livelihoods and health but also the environment.
The risk and severity of environmental pollution and hazards can be, and will be, reduced with machine learning and deep learning in the years to come. There have already been impressive applications in chemical engineering, process safety, process control tuning, advanced dynamics, and process optimization. Yet when it comes to software, and machine learning solutions in particular, the oil and gas industry has seen little innovation in the last 15 years.
If you are a manager in oil and gas, you may fear that a hazard will occur and that you can’t reduce the risk. What can you do about it?
1. Predictive analytics for MPPL (Multiple Pipeline Product)
Model-based predictive analytics lends itself to crunching multiple data sources to pinpoint risks for pipeline integrity management, while analytics for process monitoring and measurement evolve to better discover crucial variances. Vendors such as Kageera and Optimize Global Solutions are starting to use rules and heuristic techniques to spot deviations that impact the accurate understanding of flow, production, and gas quality. By letting users set thresholds on deviations, measurement software can help spot problems such as missing, suspect, or uncollected data.
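As a rough illustration of that idea, here is a minimal sketch of threshold-based screening over a stream of flow readings, where `None` marks an uncollected value. The readings and the threshold factor are invented for the sketch; real measurement software is far more sophisticated:

```python
# A toy deviation screen: flag each reading as 'ok', 'missing', or 'suspect'.
from statistics import mean, stdev

def screen_readings(readings, k=3.0):
    """Flag readings that are absent or deviate more than k standard
    deviations from the mean of the collected values."""
    present = [r for r in readings if r is not None]
    mu, sigma = mean(present), stdev(present)
    flags = []
    for r in readings:
        if r is None:
            flags.append("missing")       # uncollected data
        elif abs(r - mu) > k * sigma:
            flags.append("suspect")       # deviation beyond the threshold
        else:
            flags.append("ok")
    return flags

# Invented hourly flow readings with one gap and one outlier.
hourly_flow = [101.2, 99.8, 100.5, None, 100.1, 250.0, 99.9]
print(screen_readings(hourly_flow, k=2.0))
# ['ok', 'ok', 'ok', 'missing', 'ok', 'suspect', 'ok']
```

The value of the ML-based approaches described above is that they replace a fixed `k` with thresholds and deviation models learned from the asset's own history.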
Predictive analytics for pipeline integrity is emerging, but engineers must advise analytics experts on underlying data sources and configuring KPIs.
Measurement is crucial to the midstream, as are ways of converting measurement and flow data into production reports. When evaluating solutions, look for analytics and reports that can quickly update trends and generate new reports as new data from metering or gas analysis become available.
The foundation for analytics goes all the way back to plant and asset design. Install enough instrumentation to drive better predictions.
To work well, an analytics program might identify the need for some newer instrumentation, such as replacing gas charts with EFMs, investing in new acoustic leak detection technology, or online corrosion measurement transmitters.
2. Boosting business value via PyRISK™ Dynamic QRA
The frontier of dynamic and static QRA
The oil and gas industry has been using static quantitative risk assessment (QRA) and related studies for more than 50 years to evaluate the risks of major accident hazards. It is applied to demonstrate risk to the public and employees as part of complying with regulatory requirements, which vary worldwide.
Static QRA studies typically play an important role early in the design stage of a project’s capital expenditure (CAPEX) phase; for example, when evaluating concepts, optimizing design, and establishing cost-effective risk management.
Currently, static QRA for oil and gas assets usually provides an overview of risk at a fixed reference point in time. Such studies involve hundreds, sometimes thousands, of scenarios covering operating conditions, manning levels, and maintenance activities. The cost can be substantial, and a study may need repeating every few years. Despite this, some operators have recognized a need to use dynamic QRA as a fundamental decision-support tool for their projects and operations beyond the explicit regulatory requirements. By doing so, they gain insights into how to save costs and maintain or increase production efficiency while operating safely.
Some operators are now moving beyond static QRA models to make better use of the huge volume of data that is collected in a process involving considerable time, effort, and cost. They are starting, or planning, to update data more frequently using a dynamic approach to risk-based safety management. New cloud-based tools, combining secure data storage and analytics, are assisting them on this journey in pursuit of potential cost, operational, and safety benefits.
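The difference between static and dynamic QRA can be shown with a deliberately simplified sketch: risk as a sum of scenario frequency times consequence, re-evaluated when new operating data changes a frequency. The scenarios and numbers below are invented for illustration; a real QRA involves hundreds or thousands of carefully derived scenarios:

```python
# Toy QRA: risk = sum over scenarios of (event frequency/year) x (consequence).
# All scenario names and figures are made up for this sketch.
scenarios = {
    "small gas leak": {"freq_per_year": 1e-2, "fatalities": 0.01},
    "large gas leak": {"freq_per_year": 1e-4, "fatalities": 1.0},
    "vessel rupture": {"freq_per_year": 1e-6, "fatalities": 10.0},
}

def total_risk(scen):
    return sum(s["freq_per_year"] * s["fatalities"] for s in scen.values())

# Static QRA: risk frozen at a fixed reference point in time.
baseline = total_risk(scenarios)

# Dynamic QRA: re-evaluate as operating data arrives, e.g. inspection data
# suggests corrosion has doubled the large-leak frequency.
scenarios["large gas leak"]["freq_per_year"] *= 2
updated = total_risk(scenarios)

print(f"baseline risk: {baseline:.2e} fatalities/year")
print(f"updated  risk: {updated:.2e} fatalities/year")
```

The point of the dynamic approach is that `updated` can be recomputed continuously from live data instead of waiting years for the next full study.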
PyRISK™
PyRISK™ is a quantitative risk assessment product that is expected to save 60% of the work effort through the full utilization of machine learning.
The outcome is faster, better decision support and simpler, easier risk communication in all phases of an asset’s life cycle. We estimate that PyRISK™ can reduce the overall cost of safety studies by 40% to 60% over the lifetime of an asset.
Dynamic risk estimation & decision-making
Kageera’s product in cooperation with Optimize Global Solutions for PyRISK™ unlocks opportunities for machine learning in risk management. It supports customers already using PyRISK™ to perform Dynamic QRA to get even more value from the data at no additional cost.
The PyRISK™ approach offers significant cost advantages. Our estimate is that an operator may end up spending 30–40% less on safety studies over the 25- to 30-year lifecycle of a medium- to large-sized asset, while getting much more value out of the data generated in their QRA studies.
3. PySEP™
PySEP™ is an integrated, holistic approach to delivering bespoke models, spanning process modeling, automation & optimization, linear programming, data analytics, and machine learning for wide-ranging applications in offshore processing, gas processing, LPG, refineries, and chemical plants. PySEP™ solutions provide the following functions, among others:
Field life cycle analysis and project evaluation.
Reservoir management and dynamic nodal analysis.
Flow assurance and hydro-dynamic slug management.
Refinery linear programming.
Operator training simulation.
The data-driven decision platform empowers senior management to make the best-suited decisions in a timely manner.
Avoid unplanned downtime.
4. PyFCC™
PyFCC™ is a smart machine learning solution for fluid catalytic cracking (FCC), delivered with out-of-the-box integrated architectures that seamlessly connect the flow of information between operating facilities and the final predictive model, cutting through field complexities. The out-of-the-box integration makes it possible to:
Proactively predict the FCC yield in response to any potential change in the refinery feedstock.
Predict kinetic parameters and overall conversion efficiency, weighing the actual operating conditions.
The data-driven decision platform empowers senior management to make the best-suited decisions in a timely manner.
Avoid unplanned downtime.
5. Remote operation and performance shutdown using IoT (PyMACHINA™)
Machine learning, deep learning, and the Internet of Things (IoT) could revolutionize the oil and gas industry. Having already made quite a storm in various other industries, including consumer electronics, this couldn’t come at a better time for the oil industry as it faces dramatic drops in the price of oil.
Remote operation with the use of IoT (no need for people to check anything on the platforms)
Remote notifications, alarms, and performance shutdown
Roughly every 20 years a massive platform explosion occurs; we can reduce this risk with deep learning algorithms and IoT.
Remote operations with the use of deep Q-learning and robotics (PyROBOT)
THE RESULT:
No accidents, no harm to people, and no damage to the environment: that is the goal of our solutions in the oil and gas industry.
6. Deep Learning risk detecting and predictive diagnostics (PyVision™)
In an industry where oil prices are constantly rising, it has become imperative to act on operating costs, productivity and lead times in order to have better returns on investment.
There are different categories of computer vision, such as image processing (including image recognition), facial recognition, and optical character recognition. This variety means that CV can be useful for many industries and practical use cases. Here are some concrete examples of CV applications currently in production:
Smart video surveillance to help detect a pool fire, for example
Automated quality control to help prevent fires in a refinery
Conclusion
Everything mentioned above entails smart, quick solutions, a big reduction in companies’ operating expenses, and consequently lower overheads. This recipe for success can help companies survive fierce competition and reduced oil prices.
On the other hand, process and chemical engineers should pull up their socks and refresh the basics and fundamentals of mathematics so they are properly equipped to apply ML in the O&G industry. To build a successful machine learning or deep learning solution, you need:
An initial screening with an experienced process flow assurance engineer and a machine learning engineer with a proven track record in the field
An agile action plan that addresses applicable and emerging risks
The ability to digest and analyze numerous sources of data
Until next time,
Happy machine learning in Oil and Gas.
If you are interested in our Oil and Gas solutions send me an email at manja.bogicevic[et]kageera.com