Five AI Priorities for Digital Trust Professionals

Author: Goh Ser Yoong, CISA, CISM, CGEIT, CRISC, CDPSE, CISSP, MBA
Date Published: 5 February 2024

In today’s digitally driven world, the proliferation of artificial intelligence (AI) has undeniably transformed how we live, work and interact. We regularly observe exciting breakthroughs, such as the recent discovery, aided by AI and supercomputing, of a new material that could reduce the amount of lithium used in batteries. However, this rapid integration of AI technologies has also raised legitimate concerns regarding ethics, bias and accountability.

For digital trust professionals, the task of fostering confidence and ensuring ethical AI practices becomes paramount. Here are five key priorities worth considering when establishing trust in AI systems in the year ahead:

1. Awareness of Misinformation, Deepfakes and Disinformation

In 2024, more than a dozen countries will hold elections, and individuals must be aware of the increased risk of misinformation, deepfake-enabled scams, and disinformation and manipulation surrounding them. For instance, there have already been instances of fake videos impersonating Singaporean leaders to lure the public into investment scams. Personal vigilance is key to preserving the integrity of information and democratic processes. Just as phishing and vishing attacks attempt to trick individuals into visiting fake websites, fabricated media attempts to trick them into trusting false content, so individuals should verify with the relevant sources whether the news or videos they receive are authentic. The onus is on each individual to pay closer attention to videos that seem “too good to be true” and to confirm the validity of shocking content by cross-checking multiple sources.
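To make this concrete, the following is a minimal sketch of one verification habit, assuming (hypothetically) that the original publisher posts a SHA-256 digest of the authentic video; the filename and digest below are invented for illustration:

    import hashlib
    from pathlib import Path

    def sha256_of(path: str) -> str:
        """Compute the SHA-256 digest of a file, reading in chunks."""
        digest = hashlib.sha256()
        with Path(path).open("rb") as f:
            for chunk in iter(lambda: f.read(8192), b""):
                digest.update(chunk)
        return digest.hexdigest()

    # Digest assumed to be published on the official channel (placeholder value).
    OFFICIAL_SHA256 = "3a7bd3e2..."

    if sha256_of("statement_video.mp4") == OFFICIAL_SHA256:
        print("Matches the officially published digest.")
    else:
        print("Digest mismatch: treat the video as untrusted.")

A check like this does not detect deepfakes on its own, but it does confirm whether a file matches what a trusted source actually released.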

2. Expect Increased Regulation of AI Usage

AI-powered digitalization and innovation, pointing toward futuristic advances such as autonomous vehicles and breakthroughs in disease research, are certainly promising. However, when pursued recklessly, they can jeopardize human lives. Hence, with the EU taking the lead in 2023 by reaching agreement on the AI Act, whose provisions are expected to take effect in stages beginning in 2025, digital trust professionals can expect other regions, industries and countries to follow suit with regulation of their own. It would therefore be wise to stay abreast of the potential impacts of those regulations.

As governments and regulatory bodies seek to address ethical concerns and ensure responsible AI practices, understanding these regulations and their compliance requirements is crucial. Compliance may require a solution to obtain new certifications or add enhancements, such as watermarking AI-generated videos. Awareness and foresight of these impacts will empower digital trust professionals to better advise their organizations on how to prepare for emerging AI regulatory frameworks.
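As a rough sketch of what such an enhancement might look like, the example below embeds simple provenance metadata into a PNG image using the Pillow library. This illustrates disclosure labeling only, not a tamper-resistant watermark or any standard mandated by regulation:

    from PIL import Image
    from PIL.PngImagePlugin import PngInfo

    def label_as_ai_generated(src: str, dst: str, model: str) -> None:
        """Embed an AI-generation disclosure into a PNG's text metadata."""
        img = Image.open(src)
        meta = PngInfo()
        meta.add_text("ai-generated", "true")
        meta.add_text("generator", model)  # hypothetical model identifier
        img.save(dst, pnginfo=meta)

    label_as_ai_generated("output.png", "output_labeled.png", "example-model-v1")

    # Reading the disclosure back:
    print(Image.open("output_labeled.png").text)

Real regulatory requirements will likely demand more robust mechanisms, such as cryptographically signed content credentials, but the principle of machine-readable disclosure is the same.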

3. Assess Data Privacy Risk and Transparency in AI Utilization

Because AI typically requires huge amounts of data, digital trust professionals should take into consideration the data privacy risks and threats associated with excessively collecting or sharing personal data for AI applications. Excessive data sharing can lead to privacy breaches, identity theft and unauthorized use of personal information.
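One practical safeguard is data minimization: share only the fields an AI application demonstrably needs. Below is a minimal sketch with hypothetical field names, in which direct identifiers are dropped before a record leaves the organization:

    # Fields the AI service is assumed to actually need (hypothetical).
    ALLOWED_FIELDS = {"age_band", "country", "interaction_history"}

    def minimize(record: dict) -> dict:
        """Keep only allow-listed fields; drop direct identifiers by default."""
        return {k: v for k, v in record.items() if k in ALLOWED_FIELDS}

    customer = {
        "name": "Jane Tan",       # direct identifier: dropped
        "nric": "S1234567A",      # national ID number: dropped
        "age_band": "30-39",
        "country": "SG",
        "interaction_history": ["ticket-101", "ticket-204"],
    }

    print(minimize(customer))  # only the three allow-listed fields remain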

Being vigilant about data privacy safeguards individuals from potential harm and ensures responsible and ethical use of personal data in AI applications. Wherever AI is used, digital trust professionals need to advocate for transparency, which stands as the cornerstone of trust-building in AI systems.

Disclosing the use of AI within systems empowers individuals to make informed choices about engaging with these technologies. Clear communication about AI integration helps individuals understand how their data is being processed and utilized. This transparency fosters a sense of control and allows users to evaluate the risks and benefits associated with AI-enabled systems.
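One way to make such disclosure machine-readable is to attach it to the system’s responses. The sketch below is purely illustrative; the field names are invented rather than drawn from any established schema:

    import json

    response = {
        "answer": "Your claim has been pre-approved.",
        "ai_disclosure": {
            "ai_involved": True,
            "purpose": "claim triage recommendation",
            "human_review": True,  # a human confirms before the final decision
            "data_categories_used": ["claim_form", "policy_history"],
        },
    }
    print(json.dumps(response, indent=2))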

4. Understanding AI Decision-Making Processes and Fairness

Understanding how AI models arrive at decisions is fundamental for building trust. By demonstrating and advocating for proper governance of how the algorithms powering AI are built, digital trust professionals can give users visibility into the factors influencing an AI model’s outputs, whether those are decisions or recommendations. Clarity regarding the decision-making process ensures consistency and reliability. When users can comprehend the rationale behind AI-generated outcomes, trust in the system’s accuracy and fairness is bolstered.

Behind AI decision-making is the input: the data. As fairness is another critical aspect of trustworthy AI, digital trust professionals must ensure that the data used to train AI models is representative and free of bias. Preventing unintentional discrimination requires rigorous measures to identify and mitigate biases at every stage of the AI lifecycle, from data collection to model development and deployment (one simple check is sketched below). That makes the task of building fair and transparent AI all the more challenging.
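As a minimal sketch of one such measure, the example below computes a demographic parity gap, comparing the rate of favorable model outcomes across groups; the predictions and group labels are invented for illustration:

    from collections import defaultdict

    def positive_rates(records: list) -> dict:
        """Rate of positive predictions per group."""
        totals, positives = defaultdict(int), defaultdict(int)
        for r in records:
            totals[r["group"]] += 1
            positives[r["group"]] += r["prediction"]  # 1 = favorable outcome
        return {g: positives[g] / totals[g] for g in totals}

    predictions = [
        {"group": "A", "prediction": 1},
        {"group": "A", "prediction": 1},
        {"group": "A", "prediction": 0},
        {"group": "B", "prediction": 1},
        {"group": "B", "prediction": 0},
        {"group": "B", "prediction": 0},
    ]

    rates = positive_rates(predictions)
    print(rates)  # {'A': 0.666..., 'B': 0.333...}
    print("parity gap:", max(rates.values()) - min(rates.values()))

A large gap does not prove discrimination by itself, but it is a signal to investigate the training data and model before deployment.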

5. Ensuring Safety and Resilience

Assuring the safety and resilience of AI systems is imperative, particularly when the outcome could affect a person’s life, as with AI in self-driving cars and other autonomous vehicles. Users must trust that these systems will not cause harm and will perform reliably under various circumstances, including when encountering unexpected inputs. Robustness and resilience in AI models instill confidence that they will function as intended without compromising safety.
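A minimal sketch of what testing against unexpected inputs can look like is shown below; the classify function is a stand-in for a real model call, and the expected behavior (return a label or reject explicitly, never crash) is the assumption under test:

    def classify(text):
        """Stand-in for a real model call; assumed to return a label string."""
        if not isinstance(text, str) or not text.strip():
            raise ValueError("input must be non-empty text")
        return "benign"  # placeholder prediction

    UNEXPECTED_INPUTS = ["", "   ", None, 42, "a" * 1_000_000, "\x00\x01"]

    for sample in UNEXPECTED_INPUTS:
        shown = repr(sample)[:25]  # truncate long inputs for display
        try:
            label = classify(sample)
            assert isinstance(label, str), "model must return a label"
            print(shown, "->", label)
        except ValueError as err:
            # Rejecting bad input explicitly is acceptable; crashing is not.
            print(shown, "->", "rejected:", err)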

In that regard, it would be worthwhile for digital trust professionals to consider picking up quality assurance and testing skills. Existing regulations within the affected industries should also be studied to better appreciate and anticipate how they will apply to AI. The stakes are evident in the increasing number of lawsuits brought against technology giants such as Meta, Google and Amazon.

Make 2024 the Year to Take Stock of AI Impact

2024 will be an exciting year for digital trust professionals involved with projects that utilize AI, both professionally and in their personal lives. However, depending on the development of regulations, resources and other external factors, 2024 could also be a year to take stock and revisit the impact of AI. Building trust in AI requires a multifaceted approach that prioritizes transparency, fairness, safety and effective governance. Digital trust professionals play a pivotal role in ensuring that AI systems are not only technically robust but also ethically sound and aligned with societal values, including individuals’ privacy rights. The security risks and threats that AI poses have grown exponentially within the past year and will only become more challenging to navigate as malicious actors weaponize AI and it takes on geopolitical dimensions.

By considering the above five priorities, digital trust professionals will be better informed and positioned to assist their organizations in developing responsible and trustworthy AI integrations, fostering a future where AI enhances human potential while upholding dignity, fairness and reliability.