A key issue for companies evaluating whether to integrate AI into their business is that they look to peers and competitors as the driver for doing so.
The glaring problem with this is that any information those firms release about their implementations is likely to be purely performative press material (i.e., pure failure), or they simply won’t release real information at all. Not releasing any information is the optimal approach if you are a firm deploying real solutions.
Returning to the issue at hand, this leaves firms that are unsure how to proceed with only poor examples of the technology, leading them either to build a weak, flawed system or to elect not to utilise it at all.
Rather than judging their own company by the actions of competitors, firms need to look inwards at their own processes and ignore what others are or aren’t doing. It does not matter that your law firm competitor has an article in the Financial Times describing how they use AI; it is likely largely performative marketing content that should not inform decision making.
Instead, firms should evaluate their options purely on internal metrics, looking towards the future of their business and the likely future of their industry.
In our previous article, we looked at common deployment failures and some of their causes, a common theme being groups not understanding the technology or the problem in terms of systems building. We won’t rehash that here; instead, we will cover how to begin evaluating the possibilities.
Starting the Evaluation: The Right Path
The first step of the proper approach is to correctly frame your workflows, your issues, and your industry’s requirements. Many of the failures alluded to in our previous article stem from misidentifying one of these areas. So, starting on the right path, how do you evaluate whether your position has potential?
1. Patterns
AI in its current iteration is pattern recognition. If your processes are largely pattern-based, that is the first positive mark on the chain. LLMs cannot provide real, reliable analysis on their own, but if they are fed sufficient sample data that demonstrates the patterns, they can be used in such a manner. Find those patterns, define them, and specify what their inputs and outputs are.
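To make that exercise concrete, here is a minimal sketch in Python of how a firm might catalogue candidate patterns alongside their inputs, outputs, and sample data. The class, field names, and example data are hypothetical, invented purely for illustration:

```python
from dataclasses import dataclass, field

@dataclass
class PatternCandidate:
    """Hypothetical record for one pattern-based step in a workflow."""
    name: str                      # what the step is
    inputs: list[str]              # what the step consumes
    outputs: list[str]             # what the step produces
    examples: list[tuple[str, str]] = field(default_factory=list)  # (input, expected output)

# Writing a candidate down quickly reveals whether you actually hold
# sample data demonstrating the pattern, or only an intuition that it exists.
invoice_coding = PatternCandidate(
    name="classify invoice line items",
    inputs=["line item description", "vendor name"],
    outputs=["expense category code"],
    examples=[
        ("AWS monthly bill - compute", "IT-CLOUD"),
        ("Taxi to client meeting", "TRAVEL"),
    ],
)
print(invoice_coding)
```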
2. Definite Metrics
If your industry deals in hard metrics, or if speed and performance are similarly in demand, AI can be notched up another step on the chain. We are not referring to ‘content generation’ or other purely output-driven areas, but rather to segments where you can create an accuracy rating, a success rating, or any metric grounded in the underlying data. In finance and accounting, an example is predictive accuracy; in law, it might be semantic similarity or relevance scoring.
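To illustrate what such a metric can look like in practice, here is a minimal, self-contained Python sketch of a simple accuracy rating and a cosine-based similarity score of the kind mentioned above. The function names and the data are invented for the example:

```python
import math

def accuracy(predictions: list[str], actuals: list[str]) -> float:
    """Fraction of predictions that match the ground truth."""
    correct = sum(p == a for p, a in zip(predictions, actuals))
    return correct / len(actuals)

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two embedding vectors (e.g. of two clauses)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Invented data, purely for illustration.
print(accuracy(["late", "on-time", "late"], ["late", "late", "late"]))  # 0.666...
print(cosine_similarity([0.1, 0.9, 0.3], [0.2, 0.8, 0.4]))              # ~0.98
```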
3. Decomposability
It is hard at the outset to do the previous step before this one, and depending on the industry this step should arguably sit at #2, but for many non-technical decision makers the former is clearer in their minds. The tasks, jobs, or workflows, however you define them, must be capable of being broken down into constituent, separate parts. If this cannot be done, the effort is fruitless. An easy example: an auditor needs to screen documents for a certain, defined issue, and the documents follow a common structure.
This can be decomposed (to a certain granularity) into:
- Document reading
- Search
- Document comparison with reference
- Success-or-fail metric
You will go deeper in actual implementation, but this is illustrative.
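As a sketch only, with function names and keyword-matching logic that are placeholders rather than a real implementation, that decomposition might look like the following in Python:

```python
def read_document(path: str) -> str:
    """Document reading: load the raw text of one document."""
    with open(path, encoding="utf-8") as f:
        return f.read()

def search(text: str, issue_terms: list[str]) -> list[str]:
    """Search: pull out the passages that mention the defined issue."""
    return [line for line in text.splitlines()
            if any(term in line.lower() for term in issue_terms)]

def compare_with_reference(passages: list[str], reference: str) -> bool:
    """Document comparison: does any flagged passage deviate from the reference wording?"""
    return any(reference.lower() not in p.lower() for p in passages)

def screen(path: str, issue_terms: list[str], reference: str) -> str:
    """Success-or-fail metric: a single pass/fail verdict per document."""
    passages = search(read_document(path), issue_terms)
    return "FAIL" if compare_with_reference(passages, reference) else "PASS"
```

In a real system each of these stubs would be replaced by a far more capable component, such as an LLM call, a retrieval step, or a scoring model; the point here is the decomposition itself, which is what lets you measure and swap each part independently.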
Answering the Tough Questions
Combining these aspects into a common approach will leave you with a decent picture of your own firm: which components could feed an AI system and how appropriate they are. The questions you may then require expertise to answer are:
- How do you use this to evaluate and build a system?
- What would it cost?
- How would it function?
This last point is imperative: a system can look good on paper until you realise that your employees don’t like using it, that they worry it might subsume some of their responsibilities, or that in practice it removes some time constraints but adds new ones of its own.
Why Bother?
Given what we have discussed, why bother? Why not let some other group figure it out?
Because, done right, it is not costly. Consider it an options contract hedging your firm against the future. You may be skeptical, but the opportunity cost of sitting it out is sizable. Competitors are unreliable benchmarks, whether through incompetence or extreme competence. Your firm does not need to make large capital expenditures on training and building large AI systems. Your only goal at the outset should be to reach a strong conclusion either way on whether your firm stands to benefit from any of this.
This is a simplified version of the approach we take when designing and evaluating the usability of AI-based systems. Each case is very different and often hinges on the specific human element within a firm, so we generalise here and stay at a relatively high level.
In our next article, we will go through the basic construction of a workflow at a lower level, based on an element from our platform.
You can contact us with questions or feedback at contact@murnell.net.