Power of Machine Learning in Dynamic QRA using PyRISK™

We are celebrating our ongoing business case with PetroGulf Misr: performing a Dynamic QRA for their offshore topside facilities to quantify the baseline and additional risk, and to make recommendations, where required, to reduce the risk to tolerable limits.

One of the requirements under this project is to perform a Quantitative Risk Assessment to identify the risk. We highlighted to the PetroGulf Misr team that a static QRA is limited: it provides an overview of the risk at a fixed reference point in time. Such studies involve hundreds, sometimes thousands, of scenarios covering operating conditions, manning levels, and maintenance activities. The cost can be substantial, and a study may need repeating every few years.

As an alternative, Optimize Global Solutions and Kageera have offered a Dynamic QRA for this challenging project to empower decision making for HSE managers and executives. PyRISK™, the product of Optimize Global Solutions in cooperation with Kageera, unlocks the opportunities of machine learning in risk management. It supports customers already using PyRISK™ for Dynamic QRA in getting even more value from their data at no additional cost.

Data Preparation

Given the critical importance of having the right, clean data, Optimize Global Solutions and Kageera apply a holistic approach that sets their oil and gas projects apart from competitors: every step, from inception to completion, stays within their expertise and base knowledge. The following processes are applied in order, to ensure the data is fit for the machine learning activities that follow.

1-Problem Definition and Frame-out.

2-Functional Block Assessment.

3-Frequency Calculations.

4-Consequence Modeling for Governing Cases.

5-Sensitivity Analysis using VBA-based PyRISK™.

Problem Frame-out

The Geisum North Field platform comprises four (4) decks, namely:

1-Upper Deck.

2-Main Deck.

3-Machinery Deck.

4-Lower Deck.

Red blocks denote the hazardous operations where the facilities handle hydrocarbons.

Functional Block Assessment

The objective of the functional blocks is to identify the global scenarios and the static and dynamic inventories for each key unit operation. Each functional block should include the operating temperature and pressure, plus the fluid composition, for further consequence modeling using DNV GL Phast.
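
As an illustration of the data each functional block carries, here is a minimal Python sketch; the field names and values are assumptions for illustration, not PyRISK™ internals:

    from dataclasses import dataclass, field

    @dataclass
    class FunctionalBlock:
        name: str
        pressure_barg: float   # operating pressure
        temperature_c: float   # operating temperature
        inventory_kg: float    # static + dynamic hydrocarbon inventory
        composition: dict = field(default_factory=dict)  # mole fractions

    # Hypothetical block; values are placeholders, not project data.
    separator = FunctionalBlock(
        name="HP Separator",
        pressure_barg=25.0,
        temperature_c=60.0,
        inventory_kg=4500.0,
        composition={"methane": 0.72, "ethane": 0.12, "propane": 0.16},
    )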

From the P&IDs, the pipe sizes, approximate lengths, and numbers of flanges, manual valves, instrument connections, etc. are counted to eventually calculate the frequency of failure.

Frequency Calculations

The frequency of each leak event is essential to understanding the likelihood of each scenario and how often each hazard may occur. Following the latest codes and standards in the oil and gas industry, and drawing on our programming experience, the VBA-based application CCE™ calculates accurate frequencies for the various leak-size scenarios, e.g. small, medium, and large leaks.
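
To illustrate the parts-count approach, here is a minimal Python sketch; the per-item frequencies are placeholder numbers, not values from CCE™ or any published dataset (a real study would use generic failure-rate data such as the IOGP leak frequency tables):

    # Placeholder generic frequencies (per item-year); illustrative only.
    GENERIC_FREQ = {
        "flange":       {"small": 1e-4, "medium": 3e-5, "large": 1e-5},
        "manual_valve": {"small": 2e-4, "medium": 5e-5, "large": 1e-5},
        "instrument":   {"small": 3e-4, "medium": 6e-5, "large": 2e-5},
    }

    def block_leak_frequency(counts):
        """Sum leak frequencies over all counted items in one block."""
        total = {"small": 0.0, "medium": 0.0, "large": 0.0}
        for item, n in counts.items():
            for size, f in GENERIC_FREQ[item].items():
                total[size] += n * f
        return total

    # Hypothetical counts taken off the P&IDs for one functional block:
    print(block_leak_frequency({"flange": 40, "manual_valve": 12, "instrument": 25}))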

Consequence Modeling

Quantifying the hazard intensity threshold, i.e. the severity of each potential hazard, is essential input for our machine learning solutions.

Data Generation & Dynamic QRA

This step is a combination of science and art: the generated data must be properly distributed in order to avoid overfitting and underfitting.

The VBA-based application PyRISK™ (an in-house tool programmed by Optimize and Kageera) is fed with thermodynamic data from Phast and can run numerous sensitivity cases. This cutting-edge tool decreased the analysis time by nearly 65% compared to conventional ways of working.
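
As a rough illustration of how sensitivity cases can be enumerated before batch-running them through a consequence model, consider this minimal Python sketch; the parameter ranges are illustrative assumptions, not project values:

    from itertools import product

    # Illustrative parameter ranges for the sensitivity study.
    pressures_barg = [10, 20, 30]
    hole_sizes_mm = [5, 25, 100]   # small / medium / large
    wind_speeds_ms = [2, 5, 10]

    cases = [
        {"pressure_barg": p, "hole_mm": d, "wind_ms": u}
        for p, d, u in product(pressures_barg, hole_sizes_mm, wind_speeds_ms)
    ]
    print(len(cases), "sensitivity cases")  # 27 combinations to batch-run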

Front-end Machine Learning Application

Taking a project from exploratory data analysis, through algorithm testing, to the selection of the most accurate predictive model is one of Kageera's outstanding strengths, and it enables our smart solutions to earn full recognition from our clients.
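
For readers curious what that workflow looks like in practice, here is a minimal sketch of the "test several algorithms, keep the most accurate" step using scikit-learn on synthetic data; it is not Kageera's actual pipeline:

    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score

    # Synthetic stand-in for the generated QRA dataset.
    X, y = make_classification(n_samples=1000, n_features=10, random_state=0)

    candidates = {
        "logistic_regression": LogisticRegression(max_iter=1000),
        "random_forest": RandomForestClassifier(random_state=0),
    }
    for name, model in candidates.items():
        score = cross_val_score(model, X, y, cv=5).mean()  # 5-fold accuracy
        print(f"{name}: {score:.3f}")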

(1) Quantitative risk assessment (QRA) benefits an operator beyond simply complying with varying regulations.

(2) Dynamically updating the QRA offers cost, production, and safety benefits.

(3) Dynamic QRA supports decision making and planning to improve risk management.

(4) Digitization assisted by online tools enables dynamic QRA.

How Machine Learning Will Change Healthcare

First of all, AI (read: deep learning) won't replace doctors any time soon. Rather, AI should be a tool that helps doctors do their jobs even better, with a higher success rate in treating patients and improving overall patient wellbeing.

Healthcare is a data goldmine, but access is still restricted by different regulations in different countries (for example, HIPAA in the US). McKinsey estimates that deep learning and machine learning in medicine could generate up to $100B in value annually, based on better decisions, optimized innovation, improved efficiency of research trials, and new tool creation for doctors, patients, insurance companies, and policymakers.

What is the main problem in healthcare, and how can we improve on the status quo?

The healthcare market size is USD 439B, and 78% of the global population suffers from health or wellness issues. The market grows 45% per year, and 76% of the world's population travels for different treatments. Global healthcare spending was projected to reach USD 8.7 trillion by 2020, and due to corona it will be around 20% more. With PyHEALTH (machine learning and deep learning implementation in healthcare), we can reduce costs by 30% to 40% annually. We collect an enormous amount of data daily, but only a small percentage, up to 4%, is used practically in the industry. The healthcare industry works much as it did during the Spanish flu of 1918. Here we are in 2020, with missions flying to Mars and self-driving cars, yet we are still dying from influenza, just as in 1918. The only difference now is that, thanks to better-equipped hospitals and better respirators, the virus pandemic will be shorter compared to the one in 1918. Still, in the last 100 years we have not seen innovation in healthcare that can help us prevent or minimize different diseases. Worldwide pandemics are a severe threat, and COVID-19 is just the beginning of the pandemics we will face in the future.

At Kageera, we research how machine learning and deep learning are impacting the healthcare industry as part of our PyHEALTH service.

How can machine learning solutions help us?

  • Better understand who is most at risk,
  • Diagnose patients,
  • Develop drugs faster,
  • Find old drugs that can help,
  • Predict the spread of disease,
  • Understand viruses better,
  • Map where viruses come from,
  • Predict the next pandemic.

Machine learning is the best tool currently in the world to predict different types of risks. One example is a prediction of potential hazards in the oil and gas industry or even the nuclear energy industry.

We need to invest more in Healthcare, pharma, and biomedicine innovation with machine learning and deep learning tools on the go.

Early statistics show that the essential risk factors determining how likely an individual is to develop a disease include the following (a minimal sketch after the list shows how such factors could feed a risk model):

  • Age,
  • Pre-existing conditions,
  • General hygiene habits,
  • Social habits,
  • Mental state,
  • General stress scores,
  • General diet and wellness,
  • Number of human interactions,
  • Frequency of interactions,
  • Location and climate,
  • Socio-economic status.
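
To make this concrete, here is a minimal sketch of how such risk factors could feed a risk classifier; all data is synthetic and the feature encoding is an illustrative assumption:

    import numpy as np
    from sklearn.ensemble import GradientBoostingClassifier

    rng = np.random.default_rng(0)
    n = 500
    X = np.column_stack([
        rng.integers(18, 90, n),   # age
        rng.integers(0, 2, n),     # pre-existing condition (0/1)
        rng.integers(0, 50, n),    # human interactions per week
        rng.uniform(0, 10, n),     # general stress score
    ])
    y = rng.integers(0, 2, n)      # disease outcome (synthetic labels)

    model = GradientBoostingClassifier().fit(X, y)
    print(model.predict_proba(X[:3])[:, 1])  # predicted risk for 3 patients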

Essential data may vary depending on the potential disease. So every disease has particular data points to track.

Understanding a disease well enough to get practical outcomes takes years, and even then diagnosis is a time-consuming process. This puts pressure on doctors, since virtually every country worldwide has a shortage of them.

Machine learning and deep learning algorithms can make disease diagnostics cheaper and more accessible. Machine learning learns from patterns, as a doctor does. The difference is that machine learning algorithms don't need to rest and deliver the same accuracy at any time of day. The key difference between machine learning and a doctor is that an expert can instantly see what the problem is and find a potential cure, while algorithms need a lot of data in order to learn. That is the key restriction, because many hospitals don't share their data, or don't even collect it. Another issue is that the data needs to be machine-readable.

Machine learning and deep learning can be used for detecting and minimizing different diseases, such as:

  • Lung cancer or breast cancer on CT scans,
  • Risk of sudden cardiac death based on electrocardiograms and cardiac MRI,
  • Risk of different dental diseases based on CT scans.

What is the most important value that machine learning is bringing to healthcare?

Copyright United Nations Goals

Every person can have access to the same quality of healthcare as top experts provide, at a low price. Machine learning can help ensure healthy lives and well-being for all, which is one of the United Nations' main goals.

Personalized patient treatment

Every person is different, with lower or higher risk of different diseases, and we each react differently to drugs and treatments. Personalized patient treatment has enormous potential with the use of machine learning and deep learning.

Machine learning can automate this complicated statistical work and help discover which characteristics indicate that a patient will respond in a particular way to a particular treatment. The algorithm can then predict a patient's probable response to that treatment.

The system learns this by cross-referencing similar patients and comparing their treatments and outcomes. The resulting outcome predictions make it much easier for doctors to design the right treatment plan. So, machine learning is a tool that helps doctors do their job even better.
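
A minimal sketch of that idea, assuming a k-nearest-neighbors model over synthetic patient records: the prediction for a new patient comes from the outcomes of the most similar past patients.

    import numpy as np
    from sklearn.neighbors import KNeighborsClassifier

    rng = np.random.default_rng(1)
    past_patients = rng.uniform(0, 1, (200, 5))  # 5 patient characteristics
    responded = rng.integers(0, 2, 200)          # 1 = responded to treatment

    model = KNeighborsClassifier(n_neighbors=10).fit(past_patients, responded)
    new_patient = rng.uniform(0, 1, (1, 5))
    print("P(response):", model.predict_proba(new_patient)[0, 1])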

What can we do with machine learning now?

Warning notifications of the potential risk of new diseases: these can help doctors predict potential outbreaks and prepare in time for future diseases.

More work is needed to develop prediction models for direct disease transmission, but knowing which data we need, and working together with experts from the field, is the first step toward successful machine learning implementation. The key is to discover the problem in the healthcare industry first, and then obtain the data needed to resolve it with machine learning.

CONCLUSION

Machine learning and deep learning are important tools in fighting diseases, including COVID-19. We need to take this opportunity, and time is of the essence NOW: people's lives are at stake. As a company, we can pool our knowledge, collect the data, and build cross-functional teams with expert doctors, healthcare providers, and the companies working with them, in order to save many lives now and in the future.

Kageera's mission and vision are to build machine learning solutions that help humans live longer and focus on the things that matter most: people, profit, planet.

If you need our urgent assistance with healthcare and COVID-19 projects, send me a message at manja.bogicevic(at)kageera.com or on LINKEDIN.

For more, follow me on LINKEDIN.

Until next time,

Happy Machine learning

Manya PyWOMEN

P.S. I want to share four random things about me:

  1. I am one of the first self-made female machine learning entrepreneurs in the world.
  2. I am on a mission to become a self-made millionaire and ForbesUnder30 (3 years to go).
  3. I have a strong economics and business background, which, combined with my machine learning skills, delivers invaluable guidance for strategic business decisions.
  4. I am an ex-professional tennis player, and I have run four half-marathons.
Optimize Oil & Gas Production with Digital Twins (LENAᵀᴹ)

We believe data, algorithms, and software should power the industry, freeing humans to use their creativity to shape a profitable, safe, and sustainable present and future. Today, heavy-asset industries like oil and gas, renewables, and energy have reached a digitalization tipping point. Increasing access to data has made data handling a game changer, even in industries that have historically been considered far from high-tech.

We believe machine learning is not a magic wand, though it is an entirely new technology whose use is just beginning: only 4% of companies worldwide have reached even the early-practice phase. Humans will still be the ones to drive the change.

Companies need to consider investing in digital twin technology (LENAᵀᴹ) that will amplify the experience and skills of their own people and assets. Digital twin technology is there to inform users about their operations and suggest measures to avoid any downtime. Once again, humans bring their own expertise to the table, supplemented by data-driven decisions and the ability to examine the data more deeply before taking any action. A creative mind is something that cannot be automated, and humans are essential in the artificial intelligence reality that is coming.

Online Digital Twins

As the operational life continues, the digital copy is updated automatically, in real-time, with current data, work records, and engineering information to optimize maintenance and operational activities. Using this information, engineers, managers, and operators can easily search the asset tags to access critical up-to-date engineering and work information and find the health of a particular asset. Previously, such tasks would take considerable time and effort, and would often lead to issues being missed, leading to failures or production outages. With Online LENAᵀᴹ, operational and asset issues are flagged and addressed early on, and the workflow becomes preventative, instead of reactive.

The reliable, real-time process data from the digital twin can be fed into simulation and analytics to optimize overall production, process conditions, and even predict failures ahead of time. A digital twin, when combined with powerful analytics and machine learning, enables predictive maintenance and optimized processes. Analytics leverage advanced pattern recognition, statistical models, mathematical models and machine learning algorithms to model an asset’s operating profile and processes and predict future performance. Appropriate, timely actions are then recommended to reduce unplanned downtime and to optimize operating conditions. With the digital twin, process simulation can also be performed to optimize the operating models based on their physical properties and thermodynamic laws.

The following three-step approach, enabled by the digital twin to optimize oil and gas production from gathering systems to gas processing plants, is fundamental to improving performance and boosting profitability:

1. Steady State

During the Front-End Engineering and Design (FEED) stage, steady-state simulation models of gas processing and other units can be created to optimize the design. During operations, engineers and operators can perform engineering studies offline to identify design changes that will significantly increase throughput as well as the reliability and safety of plant operation.

Data analytics can be used to model multiphase or single-phase fluid flow behavior in pipelines to predict pipeline holdup and potential slugging in the network. Understanding flow performance is key to optimizing the gathering network design, reducing CAPEX, and optimizing pipeline operation. With a unified simulation platform, the evolution from steady-state to dynamic simulation can be achieved effortlessly.

2. Dynamic Modeling

Dynamic modeling based on ordinary differential equations and partial differential equations can be performed on these models to validate process design such as relief and flare systems, changes in feedstock, production capacity adjustment, and controls, enabling engineers to optimize the design and reduce CAPEX and OPEX. In addition, the dynamic simulation allows effective troubleshooting, control system checkouts, and comprehensive evaluations of standard and emergency operating procedures to shorten time requirements for safe plant start-up and shutdown.
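
As a toy illustration of dynamic modeling with an ordinary differential equation, here is a sketch of a single tank's level response to a feed change, integrated with SciPy; the process parameters are illustrative assumptions:

    import numpy as np
    from scipy.integrate import solve_ivp

    A = 2.0     # tank cross-section, m^2 (illustrative)
    k = 0.5     # outflow coefficient (illustrative)
    q_in = 1.2  # feed rate after the step change, m^3/s

    def level_dynamics(t, h):
        # dh/dt = (inflow - outflow) / area, with outflow ~ k * sqrt(h)
        return [(q_in - k * np.sqrt(max(h[0], 0.0))) / A]

    sol = solve_ivp(level_dynamics, (0, 120), [1.0], max_step=1.0)
    print("final level (m):", sol.y[0, -1])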

3. Predictive Analytics to Monitor Equipment Health

Minimizing plant downtime is key to improving production, and this is where predictive analytics comes in. Predictive analytics enables modeling of rotating equipment performance, such as pumps, compressors, and turbines, using advanced pattern recognition and machine learning algorithms to identify and diagnose potential operating issues days or weeks before failures occur. Operating models incorporating past loading, ambient, and operational conditions are used to create a unique asset signature for each type of equipment. Real-time operating data is then compared against these models to detect any subtle deviations from expected equipment behavior, allowing reliable and effective monitoring of different types of equipment with no programming required during setup. The early-warning notification allows reliability and maintenance teams to assess, identify, and resolve problems, preventing major breakdowns that can cost companies millions of dollars in production slowdowns or stoppages.
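
Here is a minimal sketch of the early-warning idea: compare live sensor data against expected behavior and flag sustained deviations. The rolling baseline below is a stand-in for the trained asset signature described above, and the data is synthetic:

    import numpy as np

    rng = np.random.default_rng(2)
    expected = 80.0                           # expected bearing temperature, degC
    reading = expected + rng.normal(0, 0.5, 500)
    reading[400:] += np.linspace(0, 5, 100)   # slow drift = incipient fault

    residual = reading - expected
    window = 24
    rolling = np.convolve(residual, np.ones(window) / window, mode="valid")
    alarms = np.where(np.abs(rolling) > 2.0)[0]
    print("first early-warning sample:", alarms[0] if alarms.size else "none")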

Brief Summary of Digital Twins

A digital twin works as a rigorous back-end model that runs continuously to report key data such as production throughput and product purity, and it also includes extra features and tools for optimizing oil and gas production. Digital twins can work either offline (case studies and what-if analyses) or online (data acquisition and optimization), depending on the business requirements.

The digital twin enables operational excellence by helping oil and gas engineers, managers, operators, and owners take a model-focused approach that quickly turns massive amounts of data into business value.

These powerful data insights mean:

  1. Asset failure can be predicted.
  2. Hidden revenue opportunities can be uncovered and realized.
  3. Businesses can continuously improve in the ever-changing, competitive marketplace.

In a nutshell, the digital twin is the foundation of a digital transformation that optimizes production, detects equipment problems before failure occurs, and uncovers new opportunities for process improvement, all while reducing unplanned downtime. Depending on the facility itself, you can combine data analytics, machine learning, big data, and software applications to harness the power of the data and turn it into business value for any oil and gas company.

Until next time,

Manja Bogicevic

The NBA Data Revolution: How Machine Learning in the NBA is changing the Game

Over the last ten years, data scientists have chewed up professional baseball and spit out an almost entirely new game. Now basketball is the next game that data science and machine learning are changing completely.

The NBA's use of statistics may even surpass that of Major League Baseball, which famously let data into the locker room first; we all saw the Moneyball movie with Brad Pitt, or even read the book. Almost every team in the NBA now has a data scientist who works with the coaches. The job is to scan players to maximize their talents and to identify undervalued players. Many players use wearables and IoT sleep monitors to track their medical status and avoid injury. “The NBA’s best team, the Golden State Warriors, depend on their analytics success. The league even runs an annual hackathon to uncover new data analyst talent,” as reported in Quartz.

“Analytics are part and parcel of virtually everything we do now,” says NBA commissioner Adam Silver.

1. Three-point analysis strategy

The most significant change analytics has caused in the NBA, the rise of the three-point shot, is the result of simple math. In 2012, the average team took only 18.4 three-point shots per game, but by 2018 that figure had increased by 70%. The increased use of the three-pointer was mainly a result of analysis: a three-pointer with only a 35% chance of going in still yields more expected points than a two-point jump shot closer to the basket. So coaches now encourage players with strong three-point shooting skills, Kevin Durant and Klay Thompson for example, to shoot as often as possible.
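
The expected-points math is simple enough to check in a couple of lines (the 50% make rate on long two-pointers is an illustrative assumption):

    three_pt = 0.35 * 3  # 35% make rate on threes
    two_pt = 0.50 * 2    # 50% make rate on long twos (assumed)
    print(three_pt, two_pt)  # 1.05 vs 1.0 expected points per attempt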

2. Defense analysis strategy

More sophisticated analysis has led to other enormous changes in basketball. Teams are now much stronger at evaluating defense. With granular tracking data, a data scientist can see which players are best at containing the most efficient shots, three-pointers and dunks. Using Bayesian networks, a data scientist can discover how much better a team's overall defense is when a particular player is in the game.
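
A full Bayesian network is beyond a blog post, but a simplified on/off comparison over a (synthetic) possession log shows the idea of measuring a defense with and without a given player:

    import pandas as pd

    # Synthetic possession log: opponent points with the player on/off court.
    log = pd.DataFrame({
        "player_on_court": [True, True, False, False, True, False],
        "opp_points":      [1,    0,    2,     3,     0,    2],
    })
    rating = log.groupby("player_on_court")["opp_points"].mean() * 100
    print(rating)  # opponent points per 100 possessions, off vs. on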

As a result, certain types of players are nearly extinct. Basketball players who take a lot of inefficient two-point shots and don't grade out as staunch defenders are no longer valuable to NBA teams. We can also see why Teodosic, originally from Serbia, didn't have much success in the NBA. All players now need to be good teammates: they need to pass the ball to whichever player on the court has the best opportunity to score.

Before evaluating a player, coaches now review the player's statistics from last year together with a data scientist. Different kinds of basketball players are most valuable compared to the past. The NBA data revolution has also changed how much time “valuable” players spend playing: when the data show that a player is at risk of injury, he gets days off. Hard work is the past; we are now witnessing the smart work revolution. Silver has said that teams are even testing saliva for signs of fatigue.

The strategy of resting star players is a negative for some fans who paid for tickets at the start of the season. Why? They are stuck watching a lineup of backups play. On the other hand, the rest strategy helps players because it prolongs careers, so fans and the league will be more satisfied in the long run, and overall gameplay will improve too.

The NBA's data revolution is creating rosters of more skilled, more well-rounded players who are better rested when they do play.

The one thing that sets the Golden State Warriors apart as superstar NBA champions, even among champions of the past, is DATA.

The 2018 NBA playoffs were not actually very competitive: the Warriors dominated their way to another NBA championship, and the LeBron James-led Cleveland Cavaliers could not put up much of a fight.

The anti-climactic playoffs sparked much discussion among NBA players and fans about who is at fault for the rise of the “superstar” team. The Warriors, based in the San Francisco Bay Area, seem set to remain champions of the world's top professional basketball league for the next half-decade; at this point, they have no competition on data. If the Warriors gained an unfair advantage, you can't blame Kevin Durant for it.

The real advantage they have over other NBA teams, even other past championship squads, is an excellent eye for new talent. They use data to see who is the best fit and who has the potential to grow during the season. They have been hugely successful in the NBA's amateur draft, the annual event in which teams pick the next generation of players. The average share of their draftees' minutes played in the playoffs' final round was 38.6%.

However, simply highlighting the share of minutes played by draftees does not do justice to how well the Warriors have identified talent in the draft. Funnily enough, none of the Golden State players who have played the most minutes over that period were among the top five picks of their drafts.

The Warriors' sharpshooters Steph Curry and Klay Thompson were picked 7th in 2009 and 11th in 2011, respectively. These players have proved to be better than nearly all of the players picked before them.

But drafting the right players is not the only reason for the Warriors' enormous success. The team also has innovative, data-driven management and a robust player development infrastructure to help Curry, Thompson, and others reach peak levels. They have also been lucky: they were only able to get Durant because of an unusual one-year jump in the amount teams could spend on players.

Still, excellent drafting is the foundation of their greatness. They identified and nurtured talent that others didn’t see.

Thank you for reading,

Yours Manja

If you are interested in basketball analytics, you can drop me a message on Linkedin or Instagram.

Or send me an email at manja.bogicevic[at]kageera.com.

Everything A CEO Needs To Know About Machine Learning

In this short article, you will learn everything a CEO needs to know about machine learning: how machine learning works, what you can do with it, and how to get started.

It is all about connecting A to B

Information on ML is often confusing and sometimes downright misleading (I'm looking at you, Microsoft). But ML for business is simple:

“99% of the economic value created by AI today is through one type of AI, which is learning A to B or input to output mappings.” Andrew Ng

Almost every business application of ML today is about learning to produce certain outputs from certain inputs:

How does ML learn to predict the correct output?

  1. You collect examples of input –> output pairs (the more the better).
  2. The ML algorithm learns the connection between input and output (that’s the magic).
  3. You apply the trained algorithm (the “model”) to new input data to predict the correct output.

Almost everyone who uses ML to make $$ does it exactly like this.
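
Here is what that three-step recipe looks like in a minimal scikit-learn sketch; the data (ad spend in, sales out) is synthetic:

    import numpy as np
    from sklearn.linear_model import LinearRegression

    ad_spend = np.array([[1.0], [2.0], [3.0], [4.0]])  # input A (synthetic)
    sales = np.array([2.1, 3.9, 6.2, 8.1])             # output B (synthetic)

    model = LinearRegression().fit(ad_spend, sales)    # step 2: learn A -> B
    print(model.predict(np.array([[5.0]])))            # step 3: new input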

Use ML when you have a lot of data

ML needs data

ML is powerful because it turns data into insights. But it is less efficient at learning than people are (yes, way less efficient), so it needs a lot of data in order to learn. If you have lots of data, you should think about ML!

Data is a competitive advantage, not algorithms.

That's why Google and Facebook have no problem open-sourcing their algorithms. But they definitely don't open-source their data. If you have a lot of data no one else has, that's the perfect opportunity to build a unique ML system.

The 3 simplest ways to find your ML use cases

So you have a lot of data. Now, what do you do? Here are the 3 best ways I know to discover ML use cases:

1. Improve automated decision-making

Where do you have software that automates rule-based decisions?

For example:

  • Call Routing
  • Credit Scoring
  • Image Classification
  • Marketing Segmentation
  • Product Classification
  • Document screening

There is a good chance ML can improve the accuracy of these decisions, because ML models can capture more of the underlying complexity that connects A to B. By comparison, when you write rules into software manually (the traditional way), you can only encode rudimentary dependencies.
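
A toy contrast makes the point: on the same synthetic credit-scoring data, a shallow decision tree picks up an interaction between income and debt that a single hand-written threshold misses:

    import numpy as np
    from sklearn.tree import DecisionTreeClassifier

    rng = np.random.default_rng(3)
    income = rng.uniform(20, 120, 300)
    debt = rng.uniform(0, 60, 300)
    default = (debt / income > 0.4).astype(int)  # hidden interaction

    rule_pred = (income < 40).astype(int)        # manual rule: income only
    X = np.column_stack([income, debt])
    tree_pred = DecisionTreeClassifier(max_depth=3).fit(X, default).predict(X)

    print("rule accuracy:", (rule_pred == default).mean())
    print("tree accuracy:", (tree_pred == default).mean())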

2. Things people can do in < 1 second

Another great heuristic I first heard from Andrew Ng is:

“Pretty much anything that a normal person can do in <1 sec, we can now automate with AI,” Andrew Ng wrote on Twitter.

So what are some things humans can decide in < 1 sec?

  • Who’s in that picture?
  • Do I have a good feeling about this potential customer?
  • Does this look like a problematic CT Scan?

Many jobs are a sequence of < 1-sec decisions. Like driving:

  • Is that person going to cross the street?
  • Am I too close to the sidewalk?
  • Should I slow down?
  • … and many, many more.

Anything you can do in less than 1 second, ML can most likely do too (or it will be able to soon).

3. Get inspired by Kaggle competitions

Large corporations like Zillow, Avito, Home Depot, Santander, Allstate, and Expedia run data science competitions on Kaggle. These are challenges they want outside data scientists to solve, so the competitions give you an idea of what types of AI solutions they are working on. It's really a great resource.

Have a look at the competitions and get inspired.

Finding ML Use Cases:

  • Upgrade decision-making that’s already automated
  • Automate things people do in < 1 sec
  • Get inspired by Kaggle competitions

Don’t wait until you have a Data Science Team

Building a good data science team is super hard (and expensive!)

Many companies struggle (and ultimately fail) to build an efficient data science team. Why is it so hard?

  • Misinformation about who to hire
  • Tough competition for talent
  • Few managers who can lead data science teams effectively

In addition…

You don’t know yet whether you need a data science team

You might have a lot of data and a lot of ideas, but that doesn't mean you need your own data science team. Only after you've built your first AI systems will you really know how much manpower you'll need in the long run.

Above all, build something that works, fast

Your first goal should be to pick the lowest-hanging AI fruit and finish it quickly. This gets you miles ahead:

  • You'll achieve the tangible success that gets investors, the board, and your team excited;
  • You'll get a taste of ML and real-life experience of what works and what doesn't;
  • You'll have better ideas about where to use AI next.

With that first finished system under your belt, you are in a much better position to hire and train a data science team. Or maybe you find that you don’t want (or need) to hire a whole team because that low-hanging fruit already accounted for 80% of what you can feasibly do with AI.

ML teams should work across the whole company

If you do build an AI team, you should build it for the whole company, not just one department: a horizontal AI team.

Don't think in departments

AI experience is very transferable: To a data scientist, your CRM data looks almost the same as your inventory data. And to an algorithm, they look even more similar. It makes a lot of sense to build one AI task force for the whole firm.

Side note: For that to work, you should also have your data in one place!

More life-saving tips

Don't listen to people selling a “better algorithm”

Either they have no experience, or they're trying to sell you open source at a markup. In AI especially, everyone is cooking with water, meaning they're using publicly available algorithms.

Above all, focus on business experience & engineering quality

Work with someone who takes the time to really understand the business problem you're trying to solve and who has high standards when it comes to engineering quality. If you want lots of results with fewer headaches, the plumbing is more important than the cute algorithm that flows through it.

Magic Triangle meetings 😋

The best ideas develop when you put three types of people in a room:

  1. A person who’s in touch with the current business priorities (you?),
  2. A person who knows the data you have (your database engineer), and
  3. Someone who has lots of practical experience building ML systems.

Together, these three people can make realistic plans, really fast.

In conclusion:

  • AI is about connecting A to B.
  • Look into processes that involve a lot of data and < 1-sec decisions.
  • Get the first win before you plan big.

You have now learned everything a CEO needs to know about machine learning. If you need help along the way, drop me a line. I'm always happy to hear about exciting challenges: manja.bogicevic at kageera.ai