As the UK government indicates that the market for ensuring trustworthy AI could grow six-fold to be worth as much as £6.5bn by 2035, businesses urgently need to understand how to manage both the potential and the risks of this technology.
With regulators globally introducing ever more frameworks and compliance standards, founders who want to integrate or build on AI effectively need to prioritise trust and privacy. This year alone, regulators have introduced the EU AI Act, multiple US states are attempting to pass their own laws, and the UK's new Technology Secretary has said the British government will introduce AI safety legislation in the next year.
Understanding the impact policy and legislation could have on the products you are building is crucial. Any missteps could be costly. Understanding the privacy implications of your AI products and investing in good data practices is key.
It is no surprise that some of the earliest AI enforcement action was on privacy grounds, given the often unintentional inclusion of personal data among the vast amounts of data used to train AI tools, as well as the privacy risks in deploying certain AI applications like facial recognition technology.
Build your product or service with privacy in mind at both the training and deployment stages. Adhere to relevant data protection legislation, minimise the amount of personal data you’re using, anonymise or pseudonymise data where possible, and be clear about your data practices to establish trust with end-users, clients and regulators, including through a public-facing privacy policy on your website.
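Pseudonymisation, mentioned above, can be as simple as replacing direct identifiers with a keyed hash so records remain linkable without exposing the underlying value. The sketch below is a minimal illustration, not legal or compliance advice; the field names and the key-handling approach are assumptions, and in practice the secret key should live in a secrets manager, separate from the pseudonymised data.

```python
import hashlib
import hmac

# Hypothetical secret key for illustration only -- store a real key
# in a secrets manager, never alongside the pseudonymised data.
SECRET_KEY = b"replace-with-a-securely-stored-key"

def pseudonymise(identifier: str) -> str:
    """Replace a direct identifier (e.g. an email address) with a
    keyed HMAC-SHA256 hash, so records stay linkable across datasets
    without revealing the original value."""
    return hmac.new(SECRET_KEY, identifier.encode("utf-8"),
                    hashlib.sha256).hexdigest()

record = {"email": "jane@example.com", "purchase_total": 42.50}
safe_record = {**record, "email": pseudonymise(record["email"])}
```

Note that keyed hashing is reversible by anyone holding the key, which is why regulators generally treat pseudonymised data as still personal data; full anonymisation requires removing that link entirely.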
Darktrace’s Ben Lyons makes the point that products need to be responsible from the beginning. His advice is that regulations are likely to tighten and expectations are likely to get more specific, so think about how you demonstrate the trustworthiness of the AI you’re building.
Protect your intellectual property
As a founder, you want to ensure you’re protecting the key intellectual property (IP) your company is built on through relevant agreements, and conversely, avoid using third-party IP without clarifying that you have the right to do so. When developing and using AI, the value or provenance of content or information used as training data isn’t always clear, so make sure you have strategies to appraise the value of your ideas, as well as to trace content back to its origin.
Consider context. AI technologies are only as good as the datasets they are trained and tested on. To avoid discriminatory or otherwise detrimental outcomes for your company or users, you should prioritise time to detect potential biases and monitor areas for improvement.
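A first-pass disparity check of the kind suggested above can be done by comparing positive-outcome rates across groups. This is a deliberately simplified sketch with fabricated data; real bias auditing needs larger samples, careful choice of fairness metric and domain expertise. The "four-fifths" threshold used here is one common rule of thumb, not a legal standard for any particular jurisdiction.

```python
from collections import defaultdict

def approval_rates(decisions):
    """Compute the positive-outcome rate per group from a list of
    (group, approved) pairs -- a first-pass disparity check."""
    totals = defaultdict(int)
    positives = defaultdict(int)
    for group, approved in decisions:
        totals[group] += 1
        if approved:
            positives[group] += 1
    return {g: positives[g] / totals[g] for g in totals}

# Illustrative, fabricated decisions from a hypothetical model.
decisions = [("A", True), ("A", True), ("A", False),
             ("B", True), ("B", False), ("B", False)]
rates = approval_rates(decisions)

# Flag for review if the lowest group rate falls below 80% of the
# highest (the "four-fifths" rule of thumb used in some fairness checks).
disparity = min(rates.values()) / max(rates.values())
needs_review = disparity < 0.8
```

Running a check like this on a schedule, rather than once at launch, is what turns "detecting bias" into the ongoing monitoring the paragraph above recommends.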
Be clear with customers about what your AI tools can and can’t do. This means documenting your processes. Context-specific scorecards and transparency around model training, data usage, your AI’s outputs and related issues are becoming more common in legislative proposals, so if you’re training models, you need to be ready to discuss your dataset and methodology, and how your system reaches specific outcomes. Indeed, this will likely become a regulatory requirement.
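In practice, this kind of documentation is often kept as a structured "model card" alongside the model itself. The sketch below shows one minimal shape such a record might take; every field name here is an assumption for illustration, not a regulatory standard, and the values are placeholders.

```python
# A minimal, illustrative "model card" record. Field names and values
# are assumptions for illustration, not any regulator's required schema.
model_card = {
    "model_name": "example-classifier-v1",
    "intended_use": "Illustrative example only",
    "training_data": {
        "source": "describe provenance here",
        "personal_data": False,
        "collection_period": "2023-2024",
    },
    "evaluation": {"metric": "accuracy", "value": 0.91},  # placeholder
    "limitations": ["Not validated outside the training domain"],
}

def missing_fields(card, required=("model_name", "intended_use",
                                   "training_data", "limitations")):
    """Return the required documentation fields absent from a card,
    so incomplete documentation can be caught before release."""
    return [field for field in required if field not in card]
```

A check like `missing_fields` can run in CI so that a model cannot ship without its documentation, which keeps the scorecard current rather than an afterthought.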
Track relevant regulatory developments
Conduct risk mapping to determine which AI risks might be most relevant to your business. Any founder who integrates AI into a product will need to ensure they are across the risks that third-party tools could introduce.
Take time to understand the current AI regulatory landscape and how it may apply to your products and operations – both for internal and external use. Collaborate with legal experts, join industry groups to represent your interests and actively participate in those policy discussions that are most crucial to your startup’s success. Look into AI regulatory sandboxes to get involved in if you’re operating in a sector or use case where there is some uncertainty. Early and consistent engagement with regulators also presents opportunities to shape future regulations, positioning yourself and your company as a proactive leader in the field.
As James Clough, CTO of RobinAI, points out, regulators are typically just trying to protect consumers from being harmed – an aim that should not conflict with your business. The more the regulator understands about the benefits of your technology, the better. For this reason, startups should consider and monitor regulatory risk. Clough recommends engaging with regulators early on.
Establish a responsible AI framework
Work with your team and external counsel to develop principles and practices that reflect your company values, ensure compliance with the law, and minimise potential liability exposure to customers and third parties. Early investment in compliance infrastructure such as data governance, documentation and ethical review processes, personnel policies and sensible terms of use for customers can save startups significant time and resources in the long run. Continually assess whether your AI tool is functioning in the way you intended it to, and consult with stakeholders impacted by your AI use. Don’t forget to establish clear lines of accountability to identify the people who are responsible (and liable) for ensuring these values are implemented and acted upon.
Having frameworks and principles in place is a great starting point, but you still need to make them work in practice. Provide your teams with AI ethics-related training to educate them about the implications of AI systems and to instil a culture of confidence in the technology. This will also help you anticipate issues early. Establish what the gaps are in your and your team’s understanding, and in your understanding of your customer base’s expectations, and how you plan to address them.
To summarise, the era of “move fast and break things” is well and truly over. To make AI work for your business, you’ll need to challenge your company on your compliance plans and educate yourself from the outset. Doing your homework now will pay off in the long run.