The mainstream introduction of Generative Artificial Intelligence (GenAI) has transformed perspectives on artificial intelligence. Previously perceived either as a behind-the-scenes automation tool or through the familiar Hollywood story of robots gaining sentience and destroying humanity, AI, and particularly GenAI’s creative prowess, has sparked a noticeable paradigm shift.
In just under a year, GenAI has democratised and fast-tracked the conversation on AI usage. No longer confined to the CIOs and IT teams of large companies with mature IT and data functions and advanced security practices, the conversation now includes companies of all sizes and levels of maturity rushing to embed these new technologies within their processes, teams and infrastructure. The question has thus become: how can these companies ensure they are implementing the right supporting structures to strike their optimal balance between AI risk and AI reward?
Documented benefits are still forming: Gartner does not yet have robust data on GenAI ROI as part of its research. A recent Gartner webinar survey indicated that 70% of respondents were in the exploration phase for GenAI, 19% were piloting and only 4% had GenAI applications in production, reflecting the evolving readiness for AI adoption.
To achieve the benefits, we need to balance adoption and acceptance with risk mitigation.
Personalised Customer Experiences:
AI enables slick, personalised experiences and communications, meeting the rising expectations of customers. Businesses can tailor offerings to individual preferences, fostering stronger customer relationships and retention.
Automated Decision Making:
Certain business decisions can be efficiently automated, enhancing operational efficiency and accuracy. Distinguishing between decisions suited for automation and those requiring human judgement is key.
Cost Reduction and Efficiency Gains:
AI adoption can lead to substantial cost reductions through process automation and optimisation. Faster delivery, logistics improvements and supply chain optimisation contribute to efficiency gains, especially in a tight labour market where hiring is becoming ever harder. Augmenting internal skill sets, and increasing the productivity of already overstretched technical teams with AI-enabled functions such as AI code generators, can help take the bite out of the skills shortage in the short term.
Customer Insights and Marketing Optimisation:
AI can aid customer insight analysis, segmentation and marketing optimisation in many ways, for example through its ability to efficiently distil huge volumes of customer phone calls, web chats and other communications into high-value business outcomes, including early-warning indicators of changes in customer behaviour or sentiment, new product ideas and competition risk factors. It can also be used to identify optimal pricing and efficient supply chain networks.
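To make the distillation idea above concrete, here is a deliberately minimal sketch. The keyword lists and competitor names are hypothetical placeholders; a real system would use trained language models rather than word matching, but the shape of the outcome, raw communications reduced to a handful of early-warning counters, is the same:

```python
from collections import Counter

# Hypothetical keyword lists -- a production system would use a trained
# sentiment model, not hand-picked words.
NEGATIVE = {"cancel", "frustrated", "slow", "refund"}
COMPETITOR = {"rivalco", "otherbrand"}  # placeholder competitor names

def distil(transcripts):
    """Reduce raw transcripts to simple early-warning counters."""
    signals = Counter()
    for text in transcripts:
        words = set(text.lower().split())
        if words & NEGATIVE:
            signals["negative_sentiment"] += 1
        if words & COMPETITOR:
            signals["competitor_mention"] += 1
    return signals

calls = [
    "I want to cancel my plan, support is too slow",
    "Thinking of switching to RivalCo next month",
    "Great service, thanks!",
]
print(distil(calls))
```

The point is not the matching rule but the output shape: thousands of calls collapse into a few trackable indicators that can be trended over time.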
Security and Threat Detection:
AI excels at detecting and mitigating threats by analysing vast datasets swiftly, enabling the identification of unusual activity. AI-driven systems can generate and prioritise alerts, ensuring that potential security issues are promptly addressed. AI’s predictive capabilities also extend to forecasting potential attacks, providing valuable insight into preemptive security measures. In essence, AI is a formidable ally in fortifying security, enabling organisations to proactively safeguard their digital assets and operations.
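The "identification of unusual activity" above can be illustrated with the simplest possible anomaly detector: flag values that deviate sharply from the historical baseline. A z-score rule is a toy stand-in for the far richer models real security tooling uses, and the login figures below are invented:

```python
import statistics

def flag_anomalies(history, threshold=3.0):
    """Flag values more than `threshold` standard deviations from the mean."""
    mean = statistics.mean(history)
    stdev = statistics.stdev(history)
    return [v for v in history if stdev and abs(v - mean) / stdev > threshold]

# Hypothetical daily login counts; the final spike might indicate
# a credential-stuffing attempt worth alerting on.
logins = [102, 98, 105, 99, 101, 97, 100, 480]
print(flag_anomalies(logins, threshold=2.0))  # -> [480]
```

Production systems layer many such signals, prioritise the resulting alerts, and feed confirmed incidents back into the models, but the core pattern of baselining and deviation detection is the same.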
Poor Return on Investment:
Make no mistake: organisations will probably burn initial investments on GenAI. Poorly prioritised, poorly directed and poorly skilled projects will test leadership patience on expenditure, and that is before accounting for the risks below. The longer-term benefits and imperatives are almost indisputable, but in the meantime mitigating excessive waste requires more balanced expenditure across technology, people and process.
Reputational Risks:
Reputational risks loom large for companies navigating the evolving landscape of AI and GenAI. A misstep in deploying either of these technologies can trigger a swift and unforgiving backlash from customers, stakeholders and the wider public. Whether it’s an AI-powered recommendation algorithm reinforcing harmful biases, or a GenAI model generating controversial content, the brand consequences can be severe.
Misinformation and Security Threats:
GenAI’s capacity to generate content at scale introduces significant risks, most prominently misinformation and security threats. The proliferation of bots disseminating deceptive information and orchestrating tailor-made phishing attacks underscores the inherent vulnerabilities of AI-driven content creation. This extends to the disconcerting potential for large-scale misinformation campaigns, as bots effortlessly spawn numerous social media accounts, undermining a company’s credibility. GenAI’s potential to amplify cyber-attack risks, including malware and custom-tailored phishing attacks rooted in social engineering, calls for vigilant attention and countermeasures to safeguard against these evolving threats.
To counteract these risks, proactive measures can be taken to cross-check and minimise the dissemination of falsehoods. Demanding that AI models like ChatGPT cite their information sources, and then verifying those sources, can serve as a vital safeguard. Additionally, organisations should consider restricting GenAI to specific, reliable sources of information, recognising the powerful influence of repetition in shaping beliefs. The primary strategy for mitigating these threats lies in robust processes and comprehensive training, emphasising the need for ongoing vigilance and adaptability.
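One way to make "restrict GenAI to specific sources" concrete is a retrieval gate: the model may only answer from an approved document set, must cite which document it used, and refuses rather than guesses when nothing relevant is found. The document store and word-overlap matching below are hypothetical placeholders for a real retrieval pipeline, but the control pattern is the point:

```python
# Approved, trusted sources the assistant may draw on (hypothetical content).
APPROVED_SOURCES = {
    "policy.md": "Refunds are processed within 14 days of a returned item.",
    "faq.md": "Support is available on weekdays from 9am to 5pm.",
}

def grounded_answer(question):
    """Return passages from approved sources only, with citations;
    refuse rather than guess when nothing relevant is found."""
    terms = set(question.lower().split())
    hits = [
        (name, text)
        for name, text in APPROVED_SOURCES.items()
        if terms & set(text.lower().split())
    ]
    if not hits:
        return "No approved source covers this question."
    # In a real system these passages would be supplied to the model as
    # its only context, forcing every answer to trace back to a source.
    return " ".join(f"{text} [source: {name}]" for name, text in hits)

print(grounded_answer("when are refunds processed"))
```

The refusal branch is the safeguard: an assistant that can say "no approved source covers this" is far less likely to repeat or invent falsehoods.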
Environmental Impact:
The environmental cost of running the massive data centres required for AI models raises sustainability concerns. The ever-increasing expectation from governments and customers that environmental impacts be minimised needs to be considered in an ESG context. Addressing these concerns while optimising AI’s benefits is a complex challenge that businesses must navigate.
Human Oversight and Verification:
AI can augment human capabilities, but it should not replace human involvement, especially in critical paths such as medical diagnostics. Human error already exists, and AI introduces a new class of errors; implementing AI-assisted verification processes can help minimise both.
Striking the Right Balance
Achieving the delicate balance between AI’s risks and rewards necessitates a holistic strategy, encompassing ethical considerations, agility, value measurement, and cultural adaptation.
Begin by crafting your ethical guidelines independently, rather than relying solely on legislators or industry giants. Establish your ethical boundaries and responsibilities, engaging in extensive discussions about potential scenarios and ‘what-ifs’ with every layer of your organisation. Develop robust standards and policies tailored to your specific company, and be transparent with both staff and customers regarding data usage to empower informed decisions and mitigate the risk of bad PR. Collaborate across diverse groups within the organisation to gain a comprehensive view of AI’s impacts in all areas of the business and to all customers.
Recognise that AI technology is not static; it continues to evolve at lightning speed, and it is often in the overlap of two complementary but separate advancements that hidden risks occur. To navigate this dynamic landscape, embed the ability to pivot and adapt quickly. Instead of merely building a foundation, focus on ongoing scaffolding. Cultivate a culture of continual conversation and progression, supported by the right mix of organisational strategy, reliable processes, talent development and culture. Understand, evaluate and train your staff’s mindset and understanding, and ensure your goals encourage an ethics-first approach.
Clearly articulate the value AI brings to your organisation, moving beyond a vague strategy of “start using AI” to actively measuring and tracking value while understanding the associated costs, including opportunity costs. Don’t jump in because of FOMO or fear of falling behind. Use the right technology for the right use case with the right scaffolding. Start with small proof-of-value projects and establish clear stage gates for transitioning those projects into productionised assets.
Consider bringing in AI specialists and providing internal training; AI implementation is a complex endeavour, and putting your team on a three-week internet course is not going to end well. Invest in your people.
Focus on cultural change as a fundamental aspect of AI adoption, understanding that digital transformations require significant cultural shifts. If you haven’t already made this change, whether through a standalone digital transformation or a data transformation, then buckle up: you are in for a bumpy ride unless you focus on cultural change enablement before any technology changes. Culture is the most important part of AI adoption: good leadership, open and honest conversations, bringing people along, training and digital literacy all matter. Start the maturity journey early.
Data quality remains a critical factor in AI success: a model is only as good as the data it is trained on. Prioritise creating trustworthy, transparent data.
Your AI strategy should be supported by your data strategy, which in turn should align to your business strategy. Determine how AI differentiates your business in a competitive landscape where many are adopting similar technologies. Understand the purpose of AI for your business, and then develop the approach based on the goal.
There’s no doubt AI will be disruptive across all industries in the medium to long term. Ultimately, the impact of AI on business value hinges on the foundational elements mentioned above.