The Future of Law Firms and the Legal Sector: 4 AI Trends in the Legal Profession

According to Deloitte, 100,000 legal roles will be automated by 2036. They report that by 2020 law firms will face a “tipping point” requiring a new talent strategy. Now is the time for all law firms to commit to becoming AI-ready by embracing a growth mindset, setting aside the fear of failure, and beginning to develop internal AI practices. Many believe innovation is the key to transforming the legal profession. That’s precisely what we at PyperAI, “the first legal technology venture created by a law firm,” plan to do. When a professional sector faces new technology, questions arise about how that technology will disrupt daily operations and careers. Lawyers and the legal profession are no exception.

“Can machines think?” Let’s expand on this question, posed by Alan Turing in the 1950s. Countless disaster scenarios in which artificial intelligence (AI) takes over the world and destroys humanity have already been imagined, and they are still being told in Hollywood.

AI has not yet taken control of humanity, but it has indeed taken control of many aspects of our lives, even if we do not perceive it as such. We accept AI as a part of our lives. The simplest example is our smartphones! Let’s dig deeper.

The role of Deep Learning

Over the past seven years, the dominant sub-area of AI has been deep learning. Deep learning outperforms humans especially at processing visual data: analyzing images to determine what objects or living things they contain and how they relate to each other, estimating events, tracking objects and people, and so on.

Deep learning comprises the AI models that have produced the most successful results in recent application areas; it is based on artificial neural networks and requires a great deal of processing power.

How do NLP Systems Learn Language?

Models used for natural language processing also fall within the scope of deep learning. Using natural language processing models, we can parse and classify millions of documents loaded into a computer. In this process, the system learns the relationships between words from all the documents and becomes able to predict, for example, that the word ‘carrot’ follows the word ‘rabbit’ with higher probability than the word ‘sun’ does. The AI can estimate this because it analyzes the meaning of words based on their statistical behavior in sentences. It is also possible to summarize or classify a long paragraph, extracting time and place information from individual sentences.
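
As a toy illustration of that statistical learning (a deliberately tiny sketch, not how any production NLP system is built), here is how simple co-occurrence counts already show that ‘carrot’ is a far more likely neighbor of ‘rabbit’ than ‘sun’ is:

```python
from collections import Counter

# A three-sentence toy corpus; a real model learns from millions of documents.
corpus = [
    "the rabbit ate a carrot",
    "the rabbit dug up a carrot",
    "the sun rose over the field",
]

# Count the words that appear after 'rabbit' within the same sentence.
after_rabbit = Counter()
for sentence in corpus:
    words = sentence.split()
    if "rabbit" in words:
        after_rabbit.update(words[words.index("rabbit") + 1:])

total = sum(after_rabbit.values())
for word in ("carrot", "sun"):
    print(word, after_rabbit[word] / total)
# carrot ~0.29, sun 0.0: 'carrot' follows 'rabbit' in this corpus, 'sun' never does.
```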

Leibniz: The First Lawyer to Predict the Use of Machines in Law

Leibniz, one of the grandfathers of AI, was a lawyer. He said: ‘It is unworthy of excellent men to lose hours like slaves in the labor of calculation which could safely be relegated to anyone else if machines were used.’

In 1673, he presented a machine for the four arithmetic operations in London. Leibniz also said: ‘The only way to correct our reasoning is to make it as tangible as the mathematicians’, so that we can find our error at a glance, and when there are disagreements between people, let’s calculate and see who is right!’ So let’s think: why shouldn’t it be possible for machines to complete every step of the chain of events that occurs in a lawyer’s mind while they are deciding?

Why couldn’t a machine do it? Why can it not calculate who is right in a dispute between people, or how to find a middle way? Isn’t that a ‘robot mediator’? These questions belong to the 17th century, and, I would like to point out, we are in the middle of 2019!

AI vs Lawyers

In June 2018, AI Now — a research institute examining the social implications of AI — convened a workshop with the goal of bringing together legal, scientific, and technical advocates who focus on litigating algorithmic decision-making across various areas of the law (e.g., employment, public benefits, criminal justice).

They structured the day with the practical aim of discussing strategy and best practices while also exchanging ideas and experiences in litigation and other advocacy in this space. The gathering included several of the lawyers who brought the cases alongside advocates, researchers, technical experts, social scientists, and other leading thinkers in the area of algorithmic accountability.

How will AI impact the legal profession?

Here are four AI trends to watch in the legal profession:

1. Review documents and legal research

AI-powered software improves the efficiency of document analysis for legal use: machines can review documents and flag them as relevant to a particular case. Once a certain type of document is marked as relevant, machine learning algorithms can get to work finding other documents that are similarly relevant. Machines are much faster at sorting through documents than humans and can produce output and results that can be statistically validated. They can help reduce the load on the human workforce by forwarding only the documents that are questionable, rather than requiring humans to review everything. It’s important that legal research is done in a timely and comprehensive manner, even though it’s monotonous. AI systems such as PyperAI, the one we are developing, leverage natural language processing to help analyze documents. A small sketch of the idea follows.
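
Here is a minimal sketch of the “find more documents like the flagged one” step, using scikit-learn purely for illustration (this is not PyperAI’s actual pipeline, which is not public):

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

documents = [
    "Lease agreement between landlord and tenant for office space",
    "Employment contract with a non-compete clause",
    "Office space sublease amendment and rent schedule",
    "Minutes of the annual shareholder meeting",
]

# A human reviewer has flagged this document as relevant to the case.
seed = "Lease agreement between landlord and tenant for office space"

# Score every document by textual similarity to the flagged one.
vectors = TfidfVectorizer().fit_transform([seed] + documents)
scores = cosine_similarity(vectors[0:1], vectors[1:]).ravel()

# Humans then review only the borderline scores instead of the whole pile.
for score, doc in sorted(zip(scores, documents), reverse=True):
    print(f"{score:.2f}  {doc}")
```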

2. Better perform due diligence

In law offices around the world, legal support professionals are kept busy conducting due diligence to uncover background information on behalf of their clients. This work includes confirming facts and figures and thoroughly evaluating the decisions in prior cases in order to effectively counsel clients. Artificial intelligence tools can help these legal support professionals conduct their due diligence more efficiently and more accurately.

3. Contract review

A big portion of the work law firms do on behalf of clients is reviewing contracts to identify risks and issues in how the contracts are written that could negatively impact their clients. Lawyers redline items, edit contracts, and counsel clients on whether they should sign, or help them negotiate better terms. AI can help analyze contracts in bulk as well as individually.

4. Predict legal outcomes

AI can analyze historical case data to make predictions about the outcomes of legal proceedings, often better than humans can. Clients often ask their legal counsel to predict the future with questions such as “If we go to trial, how likely is it that I win?” or “Should I settle?” With the use of AI, lawyers are able to answer such questions better.
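
A minimal sketch of what such a predictor could look like, with entirely hypothetical features and data, just to show the shape of the approach:

```python
from sklearn.linear_model import LogisticRegression

# Hypothetical case features: [claim size in $100k, favorable precedents,
# plaintiff win rate before this judge]; 1 = plaintiff won, 0 = lost.
X = [
    [1.0, 3, 0.60],
    [5.0, 0, 0.30],
    [2.5, 2, 0.55],
    [4.0, 1, 0.40],
    [0.5, 4, 0.70],
    [3.0, 0, 0.35],
]
y = [1, 0, 1, 0, 1, 0]

model = LogisticRegression().fit(X, y)

# "If we go to trial, how likely is it that I win?"
new_case = [[2.0, 2, 0.50]]
print(f"Estimated win probability: {model.predict_proba(new_case)[0][1]:.0%}")
```

A real system would train on thousands of past cases with far richer features, but the question it answers is the same one clients ask.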

Until next time,

Manja

P.S.

If you are interested in what we are developing for law firms and the legal profession, contact me on LinkedIn or Instagram, or schedule a call with me here.

P.P.S.

1. My mission is to become #NextForbesUnder30

2. I am one of the first Women Machine Learning Entrepreneurs in Serbia

3. I have run 4 half-marathons in Belgrade

4. We are developing PyperAI to help lawyers reduce time and risk and focus on making more deals

5. If you need help on your ML or AI project, contact me or my team

What is Deep Learning?

Just over 20 years ago people didn’t even know what the internet was. Today we can’t even imagine our lives without it. Today I am going to give you a quick overview of what deep learning is and why it’s picking up right now.

The reason we are going back to the past is that neural networks, along with deep learning, have been around for quite some time, yet they have only started picking up and impacting the world right now. If you look back at the 80s, you’ll see that even though neural networks were invented in the 60s and 70s, they really caught on as a trend in the 80s, and people were talking about them a lot. There was a lot of research in the area, and everybody thought that deep learning and neural networks were these new things that were going to impact the world, change everything, and solve all the world’s problems.

What happened? Why did neural networks not survive and change the world back then? The reason is that they were just not good enough: they were not that good at predicting things, and not that good at modeling.

Or is there another reason?

Well, actually there is another reason, and it is right in front of us: the technology back then was not up to the standard needed to facilitate neural networks. For neural networks and deep learning to work properly, you need two things:

  1. data
  2. strong computers to process that data

So let’s have a look at how data storage has evolved over the years, and then we’ll look at how processing power has evolved.

Here we’ve got three years: 1956, 1980, and 2018.

What did storage look like back in 1956? Well, there’s a hard drive, and that hard drive holds only 5 MB. That’s five megabytes right there in the first picture, and it is the size of a small room. In that picture, the hard drive is being transported to another location on a plane. That is what storage looked like in 1956. And in 1956 you had to pay two and a half thousand dollars (in the dollars of the day) to rent that hard drive, to rent it, not buy it, for one month.

In 1980 the situation improved a little. Here we’ve got a 10-megabyte hard drive for three and a half thousand dollars. It is still very expensive, and only 10 megabytes; that’s like one photo these days. And today, in 2018, we’ve got a 256-gigabyte SD card for $150 which can fit on your fingertip. And if you’re reading this blog a year or more later, in 2020 or 2025, you’re probably laughing at us, because by then you will have even greater storage capacity.

But nevertheless, the point stands. If we compare these across the board, even before taking price and size into consideration: from 1956 to 1980, capacity roughly doubled. From 1980 to 2018 there was a huge jump in technological progress. That goes to show that this is not a linear trend; this is exponential growth in technology, and if we also account for price and size, the increase is in the millions.
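
Here is that back-of-the-envelope arithmetic, using the figures quoted above:

```python
# Storage figures quoted above, converted to megabytes.
mb_1956, mb_1980, mb_2018 = 5, 10, 256 * 1024

print(mb_1980 / mb_1956)   # 2.0     -> capacity merely doubled in 24 years
print(mb_2018 / mb_1980)   # 26214.4 -> a ~26,000x jump in the next 38 years

# Folding in price makes the gap even starker:
print(2500 / mb_1956)      # $500 per MB in 1956 (and that was monthly rent)
print(150 / mb_2018)       # ~$0.0006 per MB in 2018, bought outright
```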

And here we actually have a chart on a logarithmic scale.

If we plot hard drive cost per gigabyte, you’ll see it looks something like this: we are very quickly approaching zero. Right now you can get storage on Dropbox and Google Drive that doesn’t cost you anything, and over the years this will go even further. Right now, scientists are looking into using DNA for storage. It is still quite expensive: it costs $7,000 to synthesize two megabytes of data. But that should remind you of the situation with the hard drive and the plane; this is going to change very, very quickly. Twenty years from now everybody may be using DNA storage, if we go down this direction. And here are some stats so you can explore further: you can store all of the world’s data in just one kilogram of DNA, or about 1 billion terabytes of data in one gram of DNA.
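
Taking those DNA numbers at face value, the current cost per gigabyte works out as follows:

```python
# $7,000 buys synthesis of 2 MB today (the figure quoted above).
cost_per_mb = 7_000 / 2
cost_per_gb = cost_per_mb * 1024
print(f"${cost_per_gb:,.0f} per GB")  # $3,584,000 per GB: expensive, for now
```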

That is just something to show how quickly we are progressing, and it is why deep learning is picking up now: we are finally at the stage where we have enough data to train super cool and super sophisticated models. Back in the 80s, when neural networks were first invented, that just wasn’t the case. The second thing we need to talk about is processing capacity.

Here we’ve got an exponential curve, again on a log scale. This is how computers have been evolving; it’s called Moore’s Law, which you’ve probably heard of. You can see how quickly the processing capacity of computers has grown.

Right now we are at the point where an average computer, bought for a thousand bucks, has roughly the processing speed of the brain of a rat. Within two to five years, around 2023, it will reach the speed of a human brain, and by 2045 or 2050 it may surpass all humans combined. Basically, we are entering the era of computers that are extremely powerful, that can process things WAY faster than we can imagine. All of this brings us to the question: what is deep learning? What is this whole neural network situation? What is going on? What are we even talking about here?

Geoffrey Hinton

This gentleman, Geoffrey Hinton, is known as the godfather of deep learning. He did research on deep learning in the 80s, has written lots and lots of research papers, and works at Google. A lot of the things we are going to talk about actually come from him. He has also got quite a few YouTube videos where he explains things really well, so I highly recommend checking them out.

The idea behind deep learning is to look at the human brain (there is going to be quite a bit of neuroscience coming up). In this blog and the ones coming up, what we are trying to do is see how the human brain operates.

We don’t know that much about the human brain, but the little we do know, we want to mimic and recreate. And why is that? Because the human brain seems to be one of the most powerful tools on this planet for learning, adapting skills, and then applying them. If computers could copy that, we could simply leverage what natural selection has already decided for us: all the algorithms it has settled on as the best. Why reinvent the wheel? So let’s see how this works.

Here we’ve got some neurons: neurons which have been smeared onto glass, stained with some coloring, and then examined under a microscope.

Here you can see what they look like. They have a body, they have branches, and they have tail-like extensions, and you can see that they have a nucleus in the middle. That is basically what a neuron looks like in the human brain.

There are approximately 100 billion neurons altogether. The ones shown here are individual neurons, in fact motor neurons, because they are bigger and easier to see. Nevertheless, there are about a hundred billion neurons in the human brain, and each one is connected to as many as about a thousand of its neighbors. To give you a picture, this is what it looks like: an actual section of the human brain.

Cerebellum

This is the cerebellum, the part of your brain at the back. It is responsible for keeping balance, along with some language capabilities and other functions. This is just to show how it works and how many neurons there are: billions and billions of neurons, all connecting, whether we are talking about five hundred, a thousand, millions, or billions of neurons in there. That is what we are going to try to recreate. How do we recreate this on a computer? Well, we create an artificial structure called an artificial neural net, where we have nodes, or neurons, and we have some neurons for input values: values that you know about a certain situation.

So, for instance, when you are modeling something and want to predict something, you always have some inputs to start your prediction from: that is called the input layer. Then you have the output: the value that you want to predict. Is somebody going to leave the bank or stay with the bank? Is this a fraudulent transaction or a real transaction? And so on. That is going to be the output layer. And in between, we are going to have a hidden layer. As you know, in your brain you have very many neurons, and information comes in through your eyes, ears, and nose: basically, your senses.

That information does not go straight to the output where you have the result: it passes through all of those billions and billions of neurons before it reaches the output. This is the whole concept behind how we are going to model the brain. We need hidden layers that sit before the output: input-layer neurons connect to hidden-layer neurons, and those connect to the output layer. This is pretty cool.

But what is this all about? Where is the deep learning here? Why is it called deep when there is nothing deep in here? Well, this version is an option one might call shallow learning, where there isn’t much depth going on.

So why is it called deep learning? Because we take this to the next level. We expand the network even further: we have not just one hidden layer but lots and lots of hidden layers, and then we connect everything, interconnecting everything just like in the human brain. The input values are then processed through all these hidden layers, just like in the human brain.

Then we have an output value, and now we’re talking deep learning.
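
To make the picture concrete, here is a minimal sketch of that structure in plain NumPy (my choice of tool, purely for illustration): an input layer, two hidden layers, and an output neuron. The weights here are random, so it predicts nothing useful yet; training those weights is exactly what the “learning” in deep learning refers to.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(0.0, x)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Input layer: 3 values we know about a situation (e.g. a bank customer).
x = rng.normal(size=3)

# Two hidden layers of 4 neurons each: the "deep" part.
W1, b1 = rng.normal(size=(4, 3)), np.zeros(4)
W2, b2 = rng.normal(size=(4, 4)), np.zeros(4)
# Output layer: one neuron, e.g. the probability the customer leaves the bank.
W3, b3 = rng.normal(size=(1, 4)), np.zeros(1)

h1 = relu(W1 @ x + b1)       # input layer  -> hidden layer 1
h2 = relu(W2 @ h1 + b2)      # hidden layer 1 -> hidden layer 2
y = sigmoid(W3 @ h2 + b3)    # hidden layer 2 -> output layer

print(f"Predicted probability: {y[0]:.2f}")
```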

So that’s what deep learning is all about, on a very abstract level. The further blogs I am going to write will dive deeper into deep learning, and by the end you will know what deep learning is all about and how to apply it in your projects.

I’m super excited about deep learning and can’t wait to get started, and I look forward to seeing you in the next blog or vlog.

Until then enjoy deep learning,

Manja

For more, follow me on LinkedIn, Instagram, or Quora.

Deep Learning Applications & Advanced Analysis in Hydrocarbon Refineries

Deep learning is an umbrella term that covers many things. In a strict sense, it means using computer programs to do what intelligent humans can do, and often doing it even better. Deep learning, alongside cognitive science, is now among the most popular computer science courses in US and European universities.

Five Attributes of Deep Learning

The cognitive tasks of AI can be divided into five categories:

(1) Perception.

(2) Learning.

(3) Forecasting.

(4) Reasoning.

(5) Coordinating.

With perception, deep learning can understand the environment through sensing, and detect and recognize occurrences: is that smell a fuel leak? From this it can learn, synthesizing that information into knowledge; this could mean learning the relationship between temperature set points and distillate yield.

It can then extract value from the generated data by forecasting with high precision; for example, we can simulate outcomes such as reservoir inflow performance (IPR) at various topside operating conditions.

When it comes to solving logical problems, or reasoning, deep learning can make decisions or suggest the best solutions; given what I know, what is the optimal distribution of my products at different terminal sites?

Despite the expanding range of problems deep learning can solve, there is one thing in which no deep learning program has been able to replace humans: defining the problem itself.

Given the obvious benefits that can be derived from adopting ML in refineries, what are the challenges that downstream oil and gas companies face when they embark on a program?

One of the biggest mistakes that companies make is that they embark on deep learning without first defining the problem. They collect lots of data, but do not know what to do with it, since they do not know what problem they are trying to solve by collecting all this data.

Machine Learning roles in Dynamic Nodal Analysis

One of the most popular subcategories of AI today is machine learning. In fact, machine learning has become so popular that many people equate it with deep learning.

Machine learning is popular because it overcomes scientific unknowns through large quantities of historical data, and hence has made fortunes for companies that in the past found their data too complex to interpret.

Machine Learning Algorithm Definition:

Machine learning is based on pattern recognition, and machine learning methods consider all data as either inputs (features) or outputs (predictions). Multiple inputs are fed into an algorithm that produces an output. If the output does not match the actual data, the algorithm is tweaked to do better next time. This is called training in machine learning.
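
A minimal sketch of that training loop, with a one-parameter model on made-up numbers, just to show the predict-compare-tweak cycle:

```python
# Made-up data: each pair is (input feature, actual output).
data = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2), (4.0, 8.1)]

weight, lr = 0.0, 0.01  # the model's single parameter and its learning rate

for epoch in range(200):
    for x, actual in data:
        predicted = weight * x        # the algorithm produces an output
        error = predicted - actual    # ...which does not match the actual data
        weight -= lr * error * x      # ...so it is tweaked to do better next time

print(f"Learned weight: {weight:.2f}")  # settles near 2, the pattern in the data
```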

Because machine learning relies on large quantities of data about the same subject, it is better at very focused problems and parameters, such as what is the relationship between vibration and engine failure?

Machine learning behaves poorly when the problem is a more complex, system-level problem, such as a refining process or an oil logistics supply chain with many moving parts, because that complexity prevents patterns from repeating.

It can also struggle when most of the information is domain specific, such as the pressure setting on a steam boiler that has a certain relationship with the steam energy generated and, subsequently, with the processes in the distillation column. Such domain-specific information cannot be utilized unless an engineer or data scientist has spent time structuring and correlating the data so that it correctly represents the relationships involved; this is something machine learning cannot replace. The cost of this manual work is often ignored when companies set out to train on their data, and they end up without meaningful conclusions.

Another problem occurs when time and sequence are important. Most machine learning programs do not incorporate time-based patterns. For example, the best way to predict the loading queue at the terminal in the next hour is to count the current queue length, and fuel demand estimates at a retail fuel station require information such as which month of the year and which day of the week it is in order to be accurate. This is where time series come in. The central point that differentiates time series problems from most other statistical problems is that in a time series the observations are not mutually independent; rather, a single chance event may affect all later data points.
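
A tiny sketch of what “incorporating time” can mean in practice: deriving lag and calendar features from a demand history so that an otherwise time-blind model can see them (pandas is my choice here; the features are the point):

```python
import pandas as pd

# Hypothetical daily fuel sales at one retail station.
sales = pd.Series(
    [120, 135, 150, 160, 155, 170, 180, 175],
    index=pd.date_range("2019-06-01", periods=8, freq="D"),
)

features = pd.DataFrame({
    "sales": sales,
    "lag_1": sales.shift(1),               # yesterday's sales (the 'current queue')
    "day_of_week": sales.index.dayofweek,  # which day of the week it is
    "month": sales.index.month,            # which month of the year it is
})
print(features.dropna())
# An ordinary regressor fed these columns can now exploit time-based patterns;
# without them, shuffled rows look mutually independent when they are not.
```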

Yet existing time series technology alone does not solve all the new problems either. Enterprises are trying to aggregate and store all data in time series format, which understands time but misses all the domain correlations. Correlation across the domain of operations is critical for gaining contextual intelligence. So even though the process historian is a familiar technology to start with, it is not sufficient.

Companies should consider the nature of the problems before they invest. You need the right AI tool based on the problem you have defined. Define the problem first, so that you can select the right tool. Do not make the auto industry’s mistake.

Categories of problem in downstream oil and gas

1. Scheduling, allocation, and coordination problems

2. Process optimization

3. Monitoring, detection, and faster responses

4. Supply chain logistics

Terminal Efficiency

In downstream terminals, maximizing loading efficiency can have a significant impact on the performance of terminal operations. Scheduling is a complex process involving truck arrival times, the terminal queue, loading bay queues, and loading times, plus the many combinations of trucks that require different products, set against the required volumes and flow rates from pumps into different loading bays. The number of calculations grows exponentially as you consider all the variables in this process, and it becomes a nearly impossible task for humans.

Today’s manual process is typically experience-based, with some amount of guesswork, which does not optimize terminal operations. With predictive analysis from A-Stack, these different variables can be used to calculate an optimized schedule, determining which loading bay each truck should use. This minimizes overall queuing at the terminal, maximizes loading efficiency, and improves supply chain logistics.
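
As a toy illustration of the underlying optimization (not A-Stack’s actual method, which is not described here), assigning each truck to a loading bay so that total loading time is minimized can be cast as a classic assignment problem:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# Hypothetical cost matrix: estimated minutes for each truck (rows) at each
# bay (columns), reflecting product availability, pump rates, and bay queues.
minutes = np.array([
    [30, 45, 20],
    [25, 30, 35],
    [40, 25, 30],
])

trucks, bays = linear_sum_assignment(minutes)  # minimizes the total minutes
for t, b in zip(trucks, bays):
    print(f"Truck {t} -> Bay {b} ({minutes[t, b]} min)")
print("Total:", minutes[trucks, bays].sum(), "min")
```

With three trucks this is trivial, but the same formulation scales to the combinations that, as noted above, quickly become impossible to schedule by hand.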

Conclusion

Five categories of intelligence can be identified, based on the various modes of operation and areas of application in the oil and gas industry.

1. Future intelligence: the ability to forecast future events with good confidence and high accuracy, based on learning from current events.

2. Historical intelligence: the ability to understand what happened.

3. Contextual intelligence: the ability to correlate multiple factors in a context and make sense of what is happening.

4. Domain intelligence: the ability to deepen domain knowledge and science.

5. Logical intelligence: the ability to compute numerous logical conditions simultaneously and find the solution.

Until next time,

Emad Gebesy, Founder of Optimize Global Solutions, Subject Matter Expert