The Future of Law Firms and the Legal Sector: 4 AI Trends in the Legal Profession

According to Deloitte, 100,000 legal roles will be automated by 2036, and by 2020 law firms will face a “tipping point” that demands a new talent strategy. Now is the time for all law firms to commit to becoming AI-ready by embracing a growth mindset, setting aside the fear of failure, and beginning to develop internal AI practices. Many believe innovation is the key to transforming the legal profession. That’s precisely what we at PyperAI, “the first legal technology venture created by a law firm,” plan to do. Whenever a professional sector faces new technology, questions arise about how that technology will disrupt daily operations and careers. Lawyers and the legal profession are no exception.

“Can machines think?” Let’s expand on this question, asked by Alan Turing in the 1950s. Countless disaster scenarios in which artificial intelligence (AI) takes over the world and destroys humanity have already been made up, and Hollywood is still telling them.

AI has not yet taken control of humanity, but it has indeed taken control of many aspects of our lives, even if we do not perceive it as such. We accept AI as a part of our lives. The simplest example is our smartphones! Let’s dig deeper.

The role of Deep Learning

Over the past seven years, deep learning has become the dominant sub-area of AI. Deep learning now outperforms humans especially at processing visual data: determining which objects or living things appear in images, how they relate to each other, estimating events, tracking objects and people, and so on.

Deep learning comprises the AI models that have produced the most successful results in recent application areas; they are based on artificial neural networks and require a great deal of processing power.

How do NLP Systems Learn Language?

Models used for natural language processing also fall within the scope of deep learning. Using natural language processing models, we can parse millions of data files loaded into the computer and sort them by class. In this process, the system learns the relationships between words from all the documents and becomes able to predict, for example, that the word ‘carrot’ comes after the word ‘rabbit’ with higher probability than the word ‘sun’. The model can estimate this because it analyzes the meaning of words based on their statistical distribution within sentences. It then becomes possible to summarize or classify a long paragraph, including the time and place information carried by its individual sentences.
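
To make the statistical idea concrete, here is a minimal toy sketch in Python: it counts which words appear after which within sentences and turns the counts into probabilities. The three-sentence corpus is invented purely for illustration; a real NLP system learns from millions of documents and far richer context than raw word order.

```python
from collections import Counter, defaultdict

# Toy corpus, invented for illustration; a real system learns from
# millions of documents.
corpus = [
    "the rabbit ate a carrot",
    "the rabbit wanted a carrot",
    "the rabbit slept in the sun",
]

# Count how often each word appears later in a sentence than another word.
comes_after = defaultdict(Counter)
for sentence in corpus:
    words = sentence.split()
    for i, earlier in enumerate(words):
        for later in words[i + 1:]:
            comes_after[earlier][later] += 1

def probability_after(earlier, later):
    """Estimated probability that `later` appears after `earlier`."""
    counts = comes_after[earlier]
    total = sum(counts.values())
    return counts[later] / total if total else 0.0

print(probability_after("rabbit", "carrot"))  # 0.2 (higher)
print(probability_after("rabbit", "sun"))     # 0.1 (lower)
```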

Leibniz: The First Lawyer to Predict the Use of Machines in Law

Leibniz, one of the grandfathers of AI, was a lawyer, and he said: ‘It is unworthy of excellent men to lose hours like slaves in the labor of calculation which could safely be relegated to anyone else if machines were used.’

In 1673, he presented his machine for the four arithmetic operations in the UK. Leibniz also said: ‘The only way to correct our reasoning is to make it as tangible as that of the mathematicians, so that we can find our error at a glance, and when there are disagreements between people, let’s calculate and see who is right!’ So, let’s think: why shouldn’t it be possible for machines to complete every step of the chain of reasoning that occurs in a lawyer’s mind while they are deciding?

Why couldn’t a machine do it? Why can it not calculate who is right in a dispute between people, or how to find the middle way? Isn’t that a ‘robot mediator’? These questions date back to the 17th century, and, I would like to point out, we are now in the middle of 2019!

AI vs Lawyers

In June 2018, the AI Now Institute, a research institute examining the social implications of AI, convened a workshop with the goal of bringing together legal, scientific, and technical advocates who focus on litigating algorithmic decision-making across various areas of the law (e.g., employment, public benefits, criminal justice).

They structured the day with the practical aim of discussing strategy and best practices while also exchanging ideas and experiences in litigation and other advocacy in this space. The gathering included several of the lawyers who brought the cases alongside advocates, researchers, technical experts, social scientists, and other leading thinkers in the area of algorithmic accountability.

How will AI impact the legal profession?

Here are the 4 AI trends I suggest watching in the legal profession:

1. Review documents and legal research

AI-powered software improves the efficiency of document analysis for legal work: machines can review documents and flag them as relevant to a particular case. Once a certain type of document is marked as relevant, machine learning algorithms can get to work finding other documents that are similarly relevant. Machines are much faster at sorting through documents than humans, and they produce output and results that can be statistically validated. They help reduce the load on the human workforce by forwarding only the questionable documents, rather than requiring humans to review everything. Legal research must be done in a timely and comprehensive manner, even though it is monotonous. AI systems such as PyperAI, the one we are developing, leverage natural language processing to help analyze documents.
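
As a hedged illustration of how such a document reviewer might work, here is a minimal sketch: a TF-IDF text representation feeding a logistic-regression classifier trained on documents a lawyer has already labeled. The documents and labels are invented, and this is a generic scikit-learn recipe rather than PyperAI’s actual pipeline.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Invented training examples: documents a lawyer already labeled.
documents = [
    "employment contract termination clause dispute",
    "invoice for office catering services",
    "non-compete agreement breach by former employee",
    "quarterly newsletter about the company picnic",
]
labels = [1, 0, 1, 0]  # 1 = relevant to the case, 0 = not relevant

# TF-IDF turns text into features; logistic regression scores relevance.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(documents, labels)

# Score a new document; borderline scores go to a human reviewer.
new_doc = ["severance terms in the employment agreement"]
prob_relevant = model.predict_proba(new_doc)[0][1]
print(f"Probability relevant: {prob_relevant:.2f}")
```

In practice the confidence score is what matters: high-confidence documents can be routed automatically, while borderline ones go to a human reviewer.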

2. Better perform due diligence

In law offices around the world, legal support professionals are kept busy conducting due diligence to uncover background information on behalf of their clients. This work includes confirming facts and figures and thoroughly evaluating the decisions in prior cases in order to provide effective counsel. Artificial intelligence tools can help these legal support professionals conduct their due diligence more efficiently and more accurately.

3. Contract review

A big portion of the work law firms do on behalf of clients is reviewing contracts to identify risks and issues in how the contracts are written that could negatively impact their clients. Lawyers redline items, edit contracts, and counsel clients on whether they should sign, or help them negotiate better terms. AI can help analyze contracts in bulk as well as individually.
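
To picture bulk contract review at its very simplest, the sketch below flags clauses by keyword patterns. Production systems use trained NLP models rather than fixed keyword lists, and the risk patterns here are illustrative assumptions only.

```python
import re

# Illustrative risk patterns; a production reviewer would use a trained
# NLP model rather than a fixed keyword list.
RISK_PATTERNS = {
    "auto-renewal": r"automatic(ally)?\s+renew",
    "unlimited liability": r"unlimited\s+liabilit",
    "unilateral termination": r"terminate\s+at\s+any\s+time",
}

def flag_clauses(contract_text):
    """Return the names of the risk patterns found in the contract text."""
    return [
        name
        for name, pattern in RISK_PATTERNS.items()
        if re.search(pattern, contract_text, flags=re.IGNORECASE)
    ]

sample = "This agreement shall automatically renew for successive one-year terms."
print(flag_clauses(sample))  # ['auto-renewal']
```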

4. Predict legal outcomes

AI can analyze data from past proceedings to help make predictions about the outcomes of legal proceedings, often better than humans can. Clients frequently ask their legal counsel to predict the future with questions such as “If we go to trial, how likely is it that I win?” or “Should I settle?” With AI, lawyers are better able to answer such questions.
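
As a hedged sketch of outcome prediction framed as classification, the snippet below trains a model on invented case features (claim amount and counts of favorable and unfavorable precedents). A real system would need far richer features, much more data, and careful validation before any lawyer should trust its numbers.

```python
from sklearn.ensemble import GradientBoostingClassifier

# Invented historical cases: [claim_amount_usd, precedents_for,
# precedents_against]; outcome 1 = plaintiff won, 0 = plaintiff lost.
X = [
    [50_000, 4, 1],
    [250_000, 1, 5],
    [10_000, 3, 0],
    [500_000, 0, 4],
    [75_000, 5, 2],
    [300_000, 1, 6],
]
y = [1, 0, 1, 0, 1, 0]

model = GradientBoostingClassifier(random_state=0).fit(X, y)

# "If we go to trial, how likely is it that I win?"
new_case = [[120_000, 3, 1]]
print(f"Estimated win probability: {model.predict_proba(new_case)[0][1]:.0%}")
```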

Until next time,

Manja

P.S.

If you are interested in what we are developing for law firms and the legal profession, contact me on Linkedin or Instagram, or schedule a call with me here

P.P.S.

1. My mission is to become #NextForbesUnder30

2. I am one of the first Women Machine Learning Entrepreneurs in Serbia

3. I have run 4 half-marathons in Belgrade

4. We are developing PyperAI to help lawyers reduce time and risk and focus on making more deals

5. If you need help on your ML or AI project, contact me or my team

Machine Learning and Deep Learning for Fire Detection

By definition, a fire is a process in which substances combine chemically with oxygen from the air and typically give out bright light, heat, and smoke; combustion or burning. In the oil and gas industry, fire hazards are a big issue faced on a regular basis. For companies to understand how to reduce these fires, along with the damage and injuries they cause, it is important to know the difference between the types of fires that can occur. This article is about understanding those types, specifically comparing the differences between a jet fire, a pool fire, and a flash fire. We will also look into how these fires can be prevented with the use of machine learning. To understand the differences, we first need to see what causes these fires, which variables are the same across all three, and which ones distinguish them.

Jet fires are caused by high-pressure releases of hydrocarbon, causing flames to shoot out in one direction, similar to a flamethrower. For this fire to occur, there is also a need for oxygen, which allows the fire to breathe, and there needs to be a source of ignition. There are many potential ignition sources, such as a spark, heat from hot surfaces, or even reflections from mobile or tablet devices. Another source is cigarettes, which is why smoking is forbidden at gas stations. A jet fire can be identified with certainty by the way the fire shoots out in one direction.

Pool fires, on the other hand, are the result of liquid hydrocarbon interacting with oxygen and a source of ignition. Since the hydrocarbon is liquid in this case, the fire spreads out along the liquid rather than shooting out from a highly pressurized point; pool fires are not high in pressure and are far more spread out. This type of fire starts when liquid hydrocarbon comes into contact with air and any one of the ignition sources mentioned previously, producing a flame that then spreads throughout the liquid.

Finally, a flash fire is caused when hydrocarbon gas slowly seeps out until it catches fire from a source of ignition, whether a spark or high temperatures. The fire then travels through the air until it consumes all the gas in it, fully spread out and burning the gases all around.

Understanding how the different fires happen allows companies to optimize their work and fix the roots of the problems before they can occur. For jet fires, it is important that highly pressurized gases cannot interact with oxygen and a source of ignition. To prevent pool fires, leaks of liquid hydrocarbon from pipes must be sealed off before they can catch fire. As for flash fires, hydrocarbon gas should not be able to seep out from pipes, as it can catch fire in an instant, creating safety risks for workers. These steps are simple in theory but much harder in practice, considering how big the working facilities are and how many pipes need to be maintained.

Luckily, as technology progresses rapidly, we have more equipment at our disposal, such as sensors and IoT devices. However, it is still hard to process the data they produce in order to prevent potential fires, which is why deep learning is used to create a predictive model that helps companies anticipate disasters ahead of time, allowing them to be proactive rather than reactive. This is done by combining past results with real-time readings, creating simple insights that allow users to detect defects that would have led to a malfunction in the system.
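
As a sketch of that proactive idea, the snippet below trains an anomaly detector on synthetic “normal” sensor readings (pressure and temperature) and flags readings that deviate from the learned pattern. The data, thresholds, and choice of IsolationForest are all illustrative assumptions, not a production fire-prevention system.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Synthetic 'normal' pipeline readings: [pressure_bar, temperature_C].
normal = np.column_stack([
    rng.normal(50, 2, 1000),  # pressure around 50 bar
    rng.normal(80, 3, 1000),  # temperature around 80 degrees C
])

# Learn what normal operation looks like.
detector = IsolationForest(contamination=0.01, random_state=0).fit(normal)

# A sudden pressure drop with rising temperature should be flagged
# as an anomaly (-1); normal readings score 1.
readings = np.array([[50.5, 79.0], [38.0, 95.0]])
print(detector.predict(readings))
```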

Conclusion:

The challenge lies in the fact that the oil and gas industry is structured towards avoiding failure, meaning that the needed examples of failure patterns can be hard to find, but not impossible. Still, what deep learning brings, if done correctly, is an improved approach that can prevent many events catastrophic both for the environment and for workers. Over time these methods will further improve, removing far more risk from an industry that is heavily connected to our everyday lives.

Power of Machine Learning in Dynamic QRA using PyRISK™

We are celebrating our ongoing successful business case with PetroGulf Misr: performing Dynamic QRA for their offshore topside facilities to understand the baseline risk and any additional risk, with the intention of making recommendations, where required, to reduce the risk to its tolerable limits.

One of the requirements under this project is to perform a Quantitative Risk Assessment (QRA) to identify the risk. It was highlighted to the PetroGulf Misr team that static QRA has limitations: it usually provides an overview of the risk at a fixed reference point in time. Such studies involve hundreds, sometimes thousands, of scenarios covering operating conditions, manning levels, and maintenance activities. The cost can be substantial, and a study may need repeating every few years.

Alternatively, Optimize Global Solutions and Kageera have offered to apply Dynamic QRA to this challenging project to empower decision making for HSE managers and executives. PyRISK™, Optimize Global Solutions’ product developed in cooperation with Kageera, unlocks the opportunities of machine learning in risk management. It lets customers already using PyRISK™ for Dynamic QRA get even more value from their data at no additional cost.

Data Preparation

Given the critical importance of having the right, clean data, Optimize Global Solutions and Kageera apply a holistic approach that sets their oil and gas project execution apart from competitors: both the inception and the culmination of the processes lie within their expertise and base knowledge. The following steps are applied in order, to ensure the data is fit for the machine learning activities that follow.

1. Problem Definition and Framing Out
2. Functional Block Assessment
3. Frequency Calculations
4. Consequence Modeling for Governing Cases
5. Sensitivity Analysis using the VBA-based PyRISK™

Problem Frame-out

The Geisum North Field platform comprises four (4) decks, namely:

1. Upper Deck
2. Main Deck
3. Machinery Deck
4. Lower Deck

Red blocks denote the hazardous operations where the facilities handle hydrocarbons.

Functional Block Assessment

The objective of the functional blocks is to identify the global scenarios, along with the static and dynamic inventories for each key unit operation. Each functional block should include the operating temperature and pressure, plus the fluid compositions, for further consequence modeling using DNV GL’s Phast.

From the P&IDs, the pipe sizes, approximate lengths, and numbers of flanges, manual valves, instrument connections, etc. are counted in order to eventually calculate the frequency of failure.

Frequency Calculations

The frequency of a leak event is very important for deeply understanding the likelihood of each scenario and how often each hazard occurs. Following the latest codes and standards in the oil and gas industry, and drawing on programming experience, the VBA-based application CCE™ is able to calculate accurate frequency results for various leak-size scenarios, e.g., small, medium, and large leaks.
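
The parts-count idea behind such frequency calculations can be sketched in a few lines: multiply each component count by a generic leak frequency per component-year and sum. The rates below are invented placeholders, not the values CCE™ or any published failure-rate database actually uses.

```python
# Invented generic leak frequencies in events per component-year.
# Placeholders only; real studies use published failure-rate databases.
LEAK_RATE = {
    "flange": 1e-4,
    "manual_valve": 5e-5,
    "instrument_connection": 2e-4,
}

# Component counts taken off the P&IDs for one functional block.
counts = {"flange": 40, "manual_valve": 12, "instrument_connection": 25}

total = sum(n * LEAK_RATE[component] for component, n in counts.items())
print(f"Estimated leak frequency: {total:.2e} per year")
```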

Consequence Modeling

Quantifying the hazard intensity threshold and the severity of each potential hazard is essential for our machine learning solutions.

Data Generation & Dynamic QRA

This step is a combination of science and art: the produced data should follow a proper distribution, with the intention of avoiding overfitting and underfitting.

The VBA-based application PyRISK™ (an in-house tool programmed by Optimize and Kageera) is empowered with thermodynamic data from Phast and can run various sensitivity cases. This cutting-edge tool proved essential, decreasing the analysis time by nearly 65% compared to conventional ways of working.

Front-end Machine Learning Application

Starting from exploratory data analysis, moving through algorithm testing, and culminating in the selection of the most accurate predictive model is one of Kageera’s outstanding strengths, and it enables our smart solutions to take their place in the industry with full recognition from our clients.
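
A minimal sketch of that workflow, on synthetic stand-in data and with off-the-shelf scikit-learn models, is to score several candidate algorithms with cross-validation and keep the most accurate one:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

# Synthetic stand-in for the prepared QRA dataset.
X, y = make_classification(n_samples=500, n_features=10, random_state=0)

candidates = {
    "logistic_regression": LogisticRegression(max_iter=1000),
    "decision_tree": DecisionTreeClassifier(random_state=0),
    "random_forest": RandomForestClassifier(random_state=0),
}

# Score each candidate with 5-fold cross-validation and keep the best.
scores = {
    name: cross_val_score(model, X, y, cv=5).mean()
    for name, model in candidates.items()
}
best = max(scores, key=scores.get)
print(scores, "->", best)
```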

(1) Quantitative risk assessment (QRA) benefits an operator in more ways than simply complying with varying regulations.

(2) Dynamically updating the QRA offers cost, production, and safety benefits.

(3) Dynamic QRA supports decision making and planning to improve risk management.

(4) Digitization assisted by online tools enables dynamic QRA.

How Machine Learning Will Change Healthcare

First of all, AI (read: deep learning) won’t replace doctors any time soon; rather, AI should be the tool that helps doctors be even better at their jobs, with a better success rate in treating patients and improved overall patient wellbeing.

Healthcare is a data goldmine, but access is still restricted by different regulations in different countries (for example, HIPAA). McKinsey estimates that deep learning and machine learning in medicine could generate a value of up to $100B annually, based on better decisions, optimized innovation, improved efficiency of research trials, and the creation of ingenious new tools for doctors, patients, insurance companies, and policymakers.

What is the main problem in Healthcare, and how to make it better than the current status quo?

The healthcare market size is USD 439B, and 78% of the global population suffers from health or wellness issues. The market grows 45% per year, and 76% of the world’s population travels for different treatments. Global healthcare spending is projected to reach USD 8.7 trillion by 2020, which, due to corona, will be 20% higher. With PyHEALTH (machine learning and deep learning implementation in healthcare), we can reduce costs by 30% to 40% annually. We collect an enormous amount of data daily, but only a small percentage, up to 4%, is used practically in the industry. The healthcare industry is much the same as it was back during the Spanish flu of 1918. Here we are in 2020, with innovations flying to Mars and self-driving cars, yet we are dying from influenza just as we did back in 1918. The only difference now is that, thanks to better-equipped hospitals and better respirators, the virus pandemic will be shorter compared to the one in 1918. Still, we can see that in the last 100 years we have not had innovation in healthcare that can help us prevent or minimize different diseases. Worldwide pandemics are a severe threat, and COVID-19 is just the beginning of the pandemics we will face in the future.

At Kageera, we research how machine learning and deep learning are impacting the healthcare industry as part of our PyHEALTH service.

How can machine learning solutions help us?

  • Better understand who is most at risk,
  • Diagnose patients,
  • Develop drugs faster,
  • Find existing drugs that can help,
  • Predict the spread of disease,
  • Understand viruses better,
  • Map where viruses come from,
  • Predict the next pandemic.

Machine learning is the best tool currently in the world to predict different types of risks. One example is a prediction of potential hazards in the oil and gas industry or even the nuclear energy industry.

We need to invest more in Healthcare, pharma, and biomedicine innovation with machine learning and deep learning tools on the go.

Early statistics show that the essential risk factors determining how likely an individual is to develop a disease include:

  • Age,
  • Pre-existing conditions,
  • General hygiene habits,
  • Social habits,
  • Mental state,
  • General stress scores,
  • General diet and wellness,
  • Number of human interactions,
  • Frequency of interactions,
  • Location and climate,
  • Socio-economic status.

The essential data may vary depending on the potential disease, so every disease has particular data points to track.

Understanding a disease well enough to get practical outcomes takes years, and even then diagnosis is a time-consuming process. This puts pressure on doctors, since, as we all know, no country in the world has enough of them.

Machine learning and deep learning algorithms can help make disease diagnostics cheaper and more accessible. Machine learning can learn from patterns just as a doctor does. One difference is that machine learning algorithms don’t need to rest, and they have the same accuracy at any time of the day. The key difference between machine learning and a doctor is that an expert can instantly see what the problem is and find a potential cure, while algorithms need a lot of data in order to learn. That is the key restriction, because many hospitals don’t share their data, or don’t even collect it. Another issue is that the data needs to be machine-readable.

Machine learning and deep learning can be used for detecting and minimizing different diseases, such as (a toy sketch follows this list):

  • Lung cancer or breast cancer on CT scans,
  • Risk of sudden cardiac death based on electrocardiograms and cardiac MRI,
  • Risk of different dental diseases based on CT scans.
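
As a toy sketch of what such an image-based model looks like, here is a deliberately tiny convolutional network trained on random stand-in data. It only shows the shape of the approach; real diagnostic models need thousands of expert-labeled scans, rigorous validation, and regulatory approval.

```python
import numpy as np
from tensorflow import keras

# Random stand-ins for grayscale scan slices (64x64) and labels;
# a real model needs thousands of expert-labeled scans.
X = np.random.rand(32, 64, 64, 1).astype("float32")
y = np.random.randint(0, 2, size=(32,))  # 1 = finding present

# A deliberately tiny CNN, just to show the shape of the approach.
model = keras.Sequential([
    keras.layers.Conv2D(8, 3, activation="relu", input_shape=(64, 64, 1)),
    keras.layers.MaxPooling2D(),
    keras.layers.Conv2D(16, 3, activation="relu"),
    keras.layers.GlobalAveragePooling2D(),
    keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])
model.fit(X, y, epochs=1, verbose=0)

print(model.predict(X[:1]))  # probability a finding is present
```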

What is the most important value that machine learning is bringing to healthcare?

Every person can have access to the same healthcare quality as top experts provide, and for a low price. Machine learning can help ensure healthy lives and well-being for all, which is one of the main goals of the United Nations.

Personalized patient treatments

Every person is different and has a lower or higher risk of getting different diseases. We also react differently to different drugs and treatments. Personalized patient treatment has enormous potential with the use of machine learning and deep learning.

Machine learning can automate this complicated statistical work and help discover which characteristics indicate that a patient will respond in a particular way to a particular treatment. The algorithm can then predict a patient’s probable response to that treatment.

The system learns this by cross-referencing similar patients and comparing their treatments and outcomes. The resulting outcome predictions make it much easier for doctors to design the right treatment plan. So, machine learning is a tool that helps doctors do their job even better.
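
A hedged sketch of that cross-referencing idea: represent patients by a few features, find the k most similar past patients, and average their outcomes under the candidate treatment. The patient data below is invented, and a real system would use far more features and clinical validation.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

# Invented patients: [age, systolic_bp, bmi], all given treatment A.
X = np.array([
    [45, 130, 27], [62, 145, 31], [50, 120, 24],
    [70, 150, 29], [38, 118, 22], [58, 140, 30],
])
y = np.array([1, 0, 1, 0, 1, 0])  # 1 = responded well to treatment A

# Predict a new patient's probable response from the 3 most similar cases.
model = KNeighborsClassifier(n_neighbors=3).fit(X, y)
new_patient = np.array([[48, 125, 25]])
print(model.predict_proba(new_patient)[0][1])  # estimated response rate
```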

What can we do with machine learning now?

Warning notifications of the potential risk of new diseases: such notifications can help doctors predict potential diseases and prepare in time for future outbreaks.

We need to work more on developing prediction models for direct disease transmission, but knowing which data we need and working together with experts from the field is the first step to successful machine learning implementation. The key is discovering the problem in the healthcare industry and then getting the data needed to resolve it with machine learning.

CONCLUSION

Machine learning and deep learning are important tools in fighting different diseases, including COVID-19. We need to take this opportunity; time is of the essence NOW, and people’s lives are at stake. We, as a company, can use our knowledge to collect the data, pool our expertise, and build cross-functional teams with expert doctors, healthcare providers, and companies working with healthcare providers, in order to save many lives now and in the future.

Kageera’s mission and vision are to build machine learning solutions that help humans live longer and focus on the things that matter most: people, profit, planet.

If you need our urgent assistance on healthcare and COVID-19 projects, send me a message at manja.bogicevic(at)kageera.com or message me on LINKEDIN

For more follow me on LINKEDIN.

Until next time,

Happy Machine learning

Manja PyWOMEN

P.S. I want to share 4 random things about me:

  1. I am one of the first self-made women machine learning entrepreneurs in the world.
  2. I am on a mission to become a self-made millionaire and make Forbes Under 30 (3 years to go).
  3. I have a strong economics and business background, which, combined with my machine learning skills, delivers invaluable guidance in making strategic business decisions.
  4. I am an ex-professional tennis player, and I have run four half-marathons.

The NBA Data Revolution: How Machine Learning in the NBA is changing the Game

Over the last ten years, data scientists have chewed up professional baseball and spit out an almost entirely new game. Now basketball is the next game that data science and machine learning are changing completely.

The NBA has embraced statistics to a degree that may even surpass Major League Baseball, the league we know let data into the locker room first. We all saw the Moneyball movie with Brad Pitt, or even read the book. Almost every NBA team now has a data scientist who works with the coaches; the job is to help scan players to maximize talents and to identify undervalued players. Many players use wearables and IoT sleep monitors to track their medical status and avoid injury. “The NBA’s best team, the Golden State Warriors, depend on their analytics success. The league even runs an annual hackathon to uncover new data analyst talent,” as reported in Quartz.

“Analytics are part and parcel of virtually everything we do now,” said NBA commissioner Adam Silver.

1. Three-point analysis strategy

The most significant change analytics has caused in the NBA is the rise of the three-point shot, and it is the result of simple math. In 2012, the average team took only 18.4 three-point shots per game, but by 2018 that number had increased by 70%. The increased use of the three-pointer was mainly a result of analysis: a three-pointer with only a 35% chance of going in still yields more points on average than a two-point jump shot closer to the basket (3 × 0.35 = 1.05 expected points versus, say, 2 × 0.45 = 0.90). So coaches now encourage players with strong three-point shooting skills, such as Kevin Durant and Klay Thompson, to shoot as often as possible.
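
The underlying arithmetic is just expected value, as this tiny calculation shows (the 45% two-point rate is an assumed figure for a mid-range jumper):

```python
# Expected points per shot = point value x probability the shot goes in.
three_pointer = 3 * 0.35  # 35% three-pointer -> 1.05 expected points
two_pointer = 2 * 0.45    # assumed 45% mid-range two -> 0.90 expected points

print(three_pointer > two_pointer)  # True: the three is the better shot
```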

2. Defense analysis strategy

More sophisticated analysis has led to the other enormous change in basketball: teams are now much stronger at evaluating defense. With granular tracking data, data scientists can see which players are best at limiting the most efficient shots, three-pointers and dunks. Using Bayesian networks, a data scientist can discover how much better a team’s overall defense is when a particular player is in the game.
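
One common way to estimate an individual defender’s impact from lineup data is a regularized regression over who was on the court (so-called adjusted plus-minus); it is a simpler relative of the Bayesian-network approach mentioned above. The stint data below is invented for illustration.

```python
import numpy as np
from sklearn.linear_model import Ridge

# Invented stints: each row marks which of 4 defenders were on court (1/0);
# the target is opponent points allowed per possession during that stint.
on_court = np.array([
    [1, 1, 0, 0],
    [1, 0, 1, 0],
    [0, 1, 1, 1],
    [1, 0, 0, 1],
    [0, 1, 0, 1],
    [0, 0, 1, 1],
])
points_allowed = np.array([0.98, 1.05, 1.12, 0.95, 1.10, 1.08])

# Regularized fit; more negative coefficients suggest a player's presence
# coincides with fewer points allowed.
model = Ridge(alpha=1.0).fit(on_court, points_allowed)
print(model.coef_)
```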

As a result, certain types of players are nearly extinct. Basketball players who take a lot of inefficient two-point shots and don’t grade out as staunch defenders are no longer valuable to NBA teams. We can also see why Teodosić, originally from Serbia, didn’t have much success in the NBA. All players now need to be good teammates: they need to pass the ball to the teammate on the court who has the better scoring opportunity.

Now, before evaluating a player, coaches go over last year’s statistics for that player with a data scientist. Different kinds of basketball players are most valuable now compared to the past. The NBA data revolution has also changed how much time “valuable” players spend playing. When data shows that a player is at risk of injury, he gets days off. Hard work alone is the past; we are now witnessing a smart-work revolution. Silver has said that teams are even testing saliva for signs of fatigue.

The resting strategy for star players is a negative outcome for some fans who paid for tickets at the start of the season. Why? They are stuck watching a lineup of backups play. On the other hand, this rest strategy helps players because it prolongs careers. In other words, fans and the league will be more satisfied in the long run, and it will make the overall gameplay better too.

The NBA’s data revolution is creating rosters of more skilled, more well-rounded players who are better rested when they do play.

The one thing that sets the Golden State Warriors apart as superstar NBA champions, even among the champions of the past, is DATA.

The 2018 NBA playoffs were not actually very competitive. The Warriors dominated their way to another NBA championship, and the LeBron James-led Cleveland Cavaliers could not put up much of a fight.

The anti-climactic playoffs sparked much discussion among NBA players and fans about who is at fault for the rise of the “superstar team.” The Warriors, based in the San Francisco Bay Area, seem set to remain champions of the world’s top professional basketball league for the next half-decade. At this point they have no competition when it comes to data. And if the Warriors gained an unfair advantage, you can’t blame Kevin Durant for it.

The real advantage they have over other NBA teams, even other past championship squads, is an excellent eye for new talent. They use data to see who is the best fit and who has the potential to grow during the season. They have been super successful in the NBA’s amateur draft, the annual event in which teams pick the next generation of players. The average share of draftees’ minutes played in the playoffs’ final round was 38.6%.

However, simply highlighting the share of minutes played by draftees does not do justice to how well the Warriors have identified talent in the draft. The funny thing is that none of the Golden State players who have played the most minutes over that period were among the top five picks in the draft.

Warriors’ sharpshooters Steph Curry and Klay Thompson were picked 7th in 2009 and 11th in 2011, respectively. These players have proved to be better than nearly all of the players picked before them.

But drafting the right players is not the only reason for the Warriors’ enormous success. The team also has innovative, data-driven management and a robust player development infrastructure to help Curry, Thompson, and others reach peak levels. They have also been lucky; they were only able to get Durant because of an unusual one-year jump in the amount teams could spend on players.

Still, excellent drafting is the foundation of their greatness. They identified and nurtured talent that others didn’t see.

Thank you for reading,

Yours Manja

If you are interested in basketball analytics, you can drop me a message on Linkedin or Instagram.

Or send me an email at manja.bogicevic[at]kageera.com