At some point in your legal career, you’ve heard this phrase or possibly uttered it yourself.
In the context of case law research to support or refute a particular position, finding a good case on all fours with your starting point can turn an argument into a principle and potentially binding precedent. Traditional research tools, however, make it difficult to do this quickly. Topical search tools, chasing citations and noting up, and crafting just the right search string all take time and still leave a lot of ground uncovered.
Additional challenges in finding the proverbial needle in the haystack include the time you have available to research, your skill as a researcher, and the budget to keep circling through your resources again and again, applying different terms and methods each time until you find what you need or conclude there is nothing more to find. More broadly, it is generally beyond human capability to run every possible combination of factors through separate searches to find what you are looking for.
In late September, vLex introduced “Vincent”, an intelligent assistant designed to tackle precisely these challenges. Whether your starting document is a judgment, factum, memorandum, academic article or even a blog post on a legal matter, Vincent takes on the task of finding “something like this” in seconds. Through AI (artificial intelligence) techniques such as Natural Language Processing, Machine Learning and semi-supervised Deep Learning, Vincent runs the searches that humans would do if they had the time (key phrase, legal topic matching, citation and note up) as well as the searches that humans simply can’t do (semantic similarity at word, phrase, paragraph and document levels), in order to develop a results list of contextually related materials that can be further searched, sorted and filtered to find what’s needed.
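vLex has not published how Vincent works internally, but the document-level semantic similarity the article describes can be illustrated with a deliberately simplified sketch: represent each document as a word-count vector and rank a corpus by cosine similarity to the starting document. Real systems use learned embeddings rather than raw word counts; the function names and the miniature corpus below are hypothetical, for illustration only.

```python
import math
from collections import Counter

def vectorize(text):
    """Toy bag-of-words representation: a map of word -> count."""
    return Counter(text.lower().split())

def cosine_similarity(a, b):
    """Cosine similarity between two sparse word-count vectors."""
    dot = sum(a[w] * b[w] for w in a if w in b)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

# Hypothetical starting document and corpus.
query = "damages awarded for breach of contract"
corpus = [
    "breach of contract and the measure of damages",
    "negligence and the standard of care",
    "remedies for contractual breach including damages",
]

qv = vectorize(query)
ranked = sorted(corpus,
                key=lambda d: cosine_similarity(qv, vectorize(d)),
                reverse=True)
# The contract-damages documents rank above the negligence one,
# even though no search string was crafted by hand.
```

The point of the sketch is only that similarity is computed over whole documents at once, which is why this kind of search scales in a way that running every keyword combination by hand cannot.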
International legal research platform vLex, in conjunction with vLex Canada (formerly Maritime Law Book and Compass Law), developed Vincent as a set of capabilities trained on the law in each jurisdiction in which it’s offered and designed to introduce the concept of “contextual research” to Canada and several other countries. The model was pioneered in the United States a couple of years ago by legal tech startup Casetext, and has since been emulated by legal tech darling ROSS Intelligence, but neither of those was designed to incorporate concurrent search of secondary source material, third-party licensed content or a firm’s own research resources. As an example of the latter, vLex Canada is working with Supreme Advocacy to develop appellate workflow tools to accelerate and improve legal research.
Though new to legal research, contextual search is something we already experience in other parts of our lives (think of Amazon telling you that “people who bought this also bought this”, or Netflix recommendations when you search for a movie it doesn’t have), and in an era of information overload it is becoming a virtual necessity as traditional research methods reach their natural limits. In the legal space, it is also a natural extension of the places where we have seen AI deliver a “better, faster” approach over the past few years: document review for due diligence or discovery purposes, and quantum or predictive tools that assist in determining the cost of settlements. In those examples, as in contextual legal research, the objective is to move quickly and confidently through a large volume of information as if a skilled professional had spent the time doing each step or reviewing each document one by one. Once the repetitive, time-consuming and costly task of finding “something like this”, or the needle in the haystack, can be entrusted with confidence to an intelligent assistant, lawyers can focus their efforts and skills on subsets of relevant documents, and managing partners can breathe a little easier knowing that the amount of unbillable research time is going to go way, way down.
Any article or other information or content expressed or made available in this Section is that of the respective author and not of the OBA.