Article

4 Areas AI providers should concentrate on for customer contract success

30 March 2022 | Applicable law: England and Wales

This article was originally published by CoverWallet.

AI technology continues to change how we do business, and it presents an amazing opportunity for startups to find new and exciting ways to automate and integrate the customer service experience.

AI and machine learning can help companies more effectively process data from social media, telephone calls, and online chat sessions and convert it into useful information that can inform business decisions. The new technology makes communicating with customers easier than it’s ever been, and it presents tremendous benefits for a startup.

But when you negotiate with your enterprise customers, their standard contracts and arrangements might not be a great fit for artificial intelligence startups. You’re offering technology that’s still very much in the development phase, which is why it’s so essential to keep these four important issues in mind when you’re in scale-up mode and negotiating with potential clients to provide AI services.

1. Will we be held liable for our AI system’s decisions?

One of the most challenging issues that you’ll encounter during negotiations is determining to what extent you’ll be held responsible for the decisions and recommendations that your AI system makes. AI technology systems are still very much in their infancy, and as the supplier, you have a responsibility to stand behind your product if there are any setbacks. Most prospective clients will include wording in contracts that requires you to accept liability.

It’s certainly reasonable for a client to hold a supplier accountable, but AI technology presents a lot of unique challenges compared to similar software-as-a-service (SaaS) systems that don’t use AI. AI by its very definition is about reduced human involvement, and suppliers are more vulnerable to risk in system delivery options any time that humans aren’t “in-the-loop” or “over-the-loop.”

When an AI system integrates with or underpins a SaaS system, there is always a risk of dynamic changes based on outdated, incomplete or biased data. Because you don’t always have the benefit of human judgment and control, you should include specific wording that excludes you from liability in a few different scenarios.

“Human-in-the-loop” AI systems

In the case of a “human-in-the-loop” system, the AI merely offers recommendations to human operators who then decide whether or not to act upon those recommendations. The raw processing power of AI provides a lot of advantages in untested environments, but it’s always prudent to approach those types of systems cautiously.

Since human operators remain in full control of the system and can always accept or decline the system’s recommendations, the AI supplier can’t reasonably be held accountable if the system makes faulty recommendations based on inaccurate or incomplete data.
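
As a rough illustration, a “human-in-the-loop” flow might look like the sketch below: the AI only proposes an action, and a named human operator records the final decision. This is a minimal sketch with hypothetical names (Recommendation, review), not any particular vendor’s API.

```python
# A minimal "human-in-the-loop" sketch: the AI proposes, the human disposes,
# and every decision is logged so responsibility stays traceable.
# `Recommendation` and `review` are hypothetical names used for illustration.
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class Recommendation:
    case_id: str
    action: str        # the action the AI proposes
    confidence: float  # model confidence, 0.0 to 1.0

def review(rec: Recommendation, operator: str) -> dict:
    """Show a recommendation to a human operator and record their decision."""
    print(f"[{rec.case_id}] AI suggests: {rec.action} (confidence {rec.confidence:.0%})")
    answer = input(f"{operator}, accept this recommendation? [y/N] ").strip().lower()
    return {
        "case_id": rec.case_id,
        "proposed": rec.action,
        "accepted": answer == "y",  # the human, not the AI, makes the call
        "operator": operator,
        "timestamp": datetime.now(timezone.utc).isoformat(),
    }
```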

“Human-over-the-loop” systems

“Human-over-the-loop” systems are a bit more autonomous than “human-in-the-loop” systems, but they still typically require human monitoring and decision-making under certain conditions. The system might be set up with fast-track authority to make routine decisions through basic if-then logic, escalating to human oversight and decision-making when certain conditions are met. If a human operator fails to adequately perform their duties, it isn’t reasonable for the AI supplier to be held responsible for financial losses arising from that negligence or human error.
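
A minimal sketch of that routing logic might look like the following, with assumed (purely illustrative) confidence and value thresholds standing in for whatever conditions the contract actually defines.

```python
# A minimal "human-over-the-loop" routing sketch: routine, high-confidence,
# low-value decisions are fast-tracked; everything else is escalated to a
# human reviewer. The thresholds below are assumptions, not contract figures.

AUTO_CONFIDENCE = 0.95   # assumed: below this confidence, a human decides
AUTO_MAX_VALUE = 1_000   # assumed: above this value, a human decides

def route(decision: dict) -> str:
    """Return 'auto-approve' for routine cases and 'escalate' otherwise."""
    if decision["confidence"] >= AUTO_CONFIDENCE and decision["value"] <= AUTO_MAX_VALUE:
        return "auto-approve"
    return "escalate"  # a human operator takes over from here

print(route({"confidence": 0.99, "value": 250}))     # auto-approve
print(route({"confidence": 0.80, "value": 250}))     # escalate: low confidence
print(route({"confidence": 0.99, "value": 50_000}))  # escalate: high stakes
```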

“Human-out-of-the-loop” systems

The idea of a fully automated “human-out-of-the-loop” system is still highly controversial. Many leading scientists, economists and other experts predict disastrous consequences if a system is able to operate entirely without human oversight or accountability.

The industry has taken a cautious approach to deploying human-out-of-the-loop systems, and they shouldn’t be used in untested situations to make high-stakes decisions. Clients who choose to fully automate AI systems do so at their own risk, and you shouldn’t be held liable if your client takes unnecessary risks.

2. Can we use customer data to improve our AI system?

To make system improvements, you need access to customer data to inform your upgrades. Obtaining the rights to that data is a high priority, and your contract should include wording to that effect. These types of permissions aren’t always standard in enterprise-level contracts, so be sure that you secure access to customer data under the following conditions.

Access to de-identified data

The customer data should be de-identified so that you don’t receive confidential client information unrelated to system improvement. This ensures access to the data you need while limiting your responsibilities and liabilities under the General Data Protection Regulation (GDPR).
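
By way of illustration only, a de-identification step might look like the sketch below. The field names and the salted-hash pseudonym are assumptions, and note that salted hashing is pseudonymization rather than full anonymization under the GDPR, so the exact technique should be agreed in the contract.

```python
# A minimal de-identification sketch: direct identifiers are dropped and the
# record is re-keyed by a salted hash, so the supplier never handles raw
# personal data. Field names and the salt are illustrative assumptions.
import hashlib

DIRECT_IDENTIFIERS = {"name", "email", "phone", "address"}  # assumed PII fields
SALT = "per-customer-secret"  # illustrative; manage and rotate salts securely

def de_identify(record: dict) -> dict:
    """Replace the customer ID with a pseudonym and drop direct identifiers."""
    pseudonym = hashlib.sha256((SALT + record["customer_id"]).encode()).hexdigest()[:16]
    kept = {k: v for k, v in record.items()
            if k not in DIRECT_IDENTIFIERS and k != "customer_id"}
    return {"pseudonym": pseudonym, **kept}

print(de_identify({"customer_id": "C-001", "name": "Jane Doe",
                   "email": "jane@example.com", "query": "refund status"}))
```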

Data cleansing provisions

The de-identified customer data should be properly formatted and cleansed so that it can fully inform future system upgrades.
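
A minimal cleansing pass over the kind of pseudonymized records sketched above (again, an illustrative assumption rather than a prescribed pipeline) might normalize text, drop incomplete rows and remove duplicates before the data informs any upgrade:

```python
# A minimal data-cleansing sketch: normalize whitespace and case, drop rows
# missing required fields, and de-duplicate so no single interaction is
# over-weighted in training. Field names are illustrative assumptions.
def cleanse(records: list[dict], required: tuple = ("pseudonym", "query")) -> list[dict]:
    seen, cleaned = set(), []
    for rec in records:
        if not all(rec.get(field) for field in required):
            continue  # incomplete rows can't inform an upgrade; drop them
        rec = {**rec, "query": " ".join(rec["query"].split()).lower()}
        key = (rec["pseudonym"], rec["query"])
        if key in seen:
            continue  # duplicates would over-weight one interaction
        seen.add(key)
        cleaned.append(rec)
    return cleaned
```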

Access permissions

You should have the permissions you require to fully utilize the cleansed data to train and improve your AI systems and, ultimately, customer satisfaction.

3. Do we own the system’s analysis and recommendations?

Your company should generally own the system’s analysis and any improvements resulting from that analysis. You can’t overemphasize the importance of an AI company owning the rights to mission-critical or business-critical data. Some enterprise-level contracts include stipulations in their terms and conditions that prevent that information from being released to you; if so, your ability to improve your product will be severely diminished.

Be sure that the contractual definition of “customer data” includes wording about who retains the intellectual property rights to improvements or derivative works. Also insist that any ambiguities in contractual clauses are resolved to your satisfaction and don’t result in IP leakage or other breaches of confidential customer data.

4. Are we being held to a reasonable standard?

AI performance is often limited by the quality of the customer’s engagement. If you and your team work together to develop and deliver a quality product, you certainly don’t want to be held responsible if the client doesn’t use it effectively. No matter how sophisticated or advanced your AI systems are, they need a certain amount of engagement from the client to be truly effective.

Unfortunately, your client will hold you accountable if your software or system doesn’t perform as expected. There should be clear language in your contract specifying the level of customer engagement necessary for successful operation. Incorporate the following information in your contract.

Minimum customer prerequisites

Include a detailed list of the minimum prerequisites the customer must provide in order for you to get the AI system properly configured and optimized.

Data quality standards

Use clear language specifying that customer-supplied data must meet minimum quality standards in order for your AI system and its components to perform properly.
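
Where the contract fixes those standards numerically, they can also be enforced in code before any data reaches the system. The thresholds in this sketch (a minimum row count and a 98% completeness rate) are illustrative assumptions; the real figures belong in the contract itself.

```python
# A minimal data-quality gate: reject a customer data set that is too small
# or too incomplete to meet the agreed standard. Thresholds are assumptions.
MIN_ROWS = 1_000         # assumed contractual minimum number of records
MIN_COMPLETENESS = 0.98  # assumed: 98% of rows must have all required fields

def meets_standard(records: list[dict], required: tuple = ("pseudonym", "query")) -> bool:
    """Return True only if the data set satisfies the agreed minimums."""
    if len(records) < MIN_ROWS:
        return False
    complete = sum(all(r.get(f) for f in required) for r in records)
    return complete / len(records) >= MIN_COMPLETENESS
```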

Limits of biased or incomplete data

Include wording explaining that AI system performance degrades when the system is presented with biased or inaccurate data. If the customer inputs faulty data, it can confound system performance and lead to inappropriate responses.

No warranties offered or implied

Make clear that no warranties are offered or implied with regard to the AI system’s perceived effectiveness. Performance indicators should always be based on valid and observable information, and those indicators and metrics should be documented in the contract.

COVID-19 pandemic provisions

The COVID-19 pandemic has led to overwhelming changes in the way we do business, and every legal document should include wording to address the ongoing issue. The global health crisis has certainly created challenges well-suited to an AI solution, but until those problems are better defined, you should establish effective frameworks for handling future pandemic-related opportunities and challenges. You might not be able to predict the future, but you can definitely plan for it.

Successful AI deployment should always be a true collaboration between the supplier and the client. The technology is still largely unknown and unproven, and it is often far more complex than the average business owner realizes. You’re providing the client a service that has existed for only a few short decades, and it’s your responsibility as the service provider to educate your clients and help them make properly informed decisions.

The power of AI is its ability to automate routine decision-making in order to free up the time and talents of human personnel. By putting clear expectations in writing, you and your clients will be able to partner effectively and use the exciting new technology of artificial intelligence to grow together.

This document (and any information accessed through links in this document) is provided for information purposes only and does not constitute legal advice. Professional legal advice should be obtained before taking or refraining from any action as a result of the contents of this document.
