The ‘magic triangle’ of innovation, risk and costs
Generative artificial intelligence presents C-level executives with a complex set of challenges. Employees come up with new ideas for AI applications almost daily, and management does not want to fall behind strategically on this key technology.
In addition, many employees face an ever-increasing workload – from enquiries to administrative tasks – while resources are dwindling. Since the skilful use of generative AI promises to achieve more with less, its adoption is often driven by employees themselves – with or without their employer's knowledge. In Switzerland alone, a good half of internet users have already used AI tools.
However, this also increases the risks for the organisation. The uncontrolled use of AI tools can lead to significant compliance violations and data protection problems – and often to incalculable costs. Many Swiss companies provide their employees with little or no clear guidelines and hardly any support in this area.
This challenge is further exacerbated by the extraordinarily dynamic development of the AI landscape. New models, services and providers are emerging at a pace that overwhelms traditional IT governance processes. At the same time, this puts even more pressure on IT budgets, while the scope for investment remains limited. For executives, this means they need solutions that both unleash their organisation's innovative power and make the associated risks and costs manageable. The key lies in well-thought-out AI governance that combines flexibility and control.
A comprehensive AI strategy is, of course, the key to mastering these challenges. In practice, however, many organisations are under pressure to make generative AI available to their employees as quickly as possible, even before overarching strategic questions have been clarified.
Below, we present an approach that allows precisely this, paving the way for the sustainable, strategic handling and productive use of AI.
The central AI hub as a pragmatic solution
Based on our work with private sector companies and public service organisations, we developed a structured governance approach that systematically addresses these risks without being overly restrictive. At its heart is an AI hub that serves as a strategic control instrument for the entire AI portfolio.
The first crucial step for organisations is to decide how and where they want to use generative AI and make it available to their employees. A potential analysis helps to identify the most important use cases with the greatest added value in everyday life. The second step is to implement these use cases in an open-ended and technology-neutral manner.
The technical heart of the AI hub is an intranet-based central platform that serves as a uniform access point and service catalogue for all generative AI use cases within the organisation. Such a platform abstracts technical complexity and presents users with simple, standardised services. The potential analysis from the first step ensures that the catalogue covers the use cases that are most relevant and valuable to employees. Whether additional, e.g. subject-specific, AI applications and services are used can be deliberately left open.
Example: an employee selects the ‘Summary’ function from the catalogue. Whether this function is implemented via a ChatGPT licence, a hosted LLaMA model or a self-operated solution is irrelevant to the employee; it is handled in the background by appropriate middleware. This abstraction creates both user-friendliness and strategic flexibility for IT management, which can use various technical implementations in the background.
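The abstraction described above can be sketched as a simple routing table: the hub exposes stable use-case names, while the middleware decides which backend actually serves each request. All function and backend names below are illustrative assumptions, not references to any specific product API.

```python
# Minimal sketch of the middleware idea: stable use-case names in the hub,
# interchangeable technical implementations behind them.

def summarize_via_cloud_api(text: str) -> str:
    # Placeholder for a call to a commercial cloud service (hypothetical).
    return f"[cloud summary of {len(text)} chars]"

def summarize_via_hosted_model(text: str) -> str:
    # Placeholder for a call to a hosted or self-operated open model (hypothetical).
    return f"[hosted-model summary of {len(text)} chars]"

# The routing table can be changed centrally, without touching the hub UI
# or retraining users -- the strategic flexibility described above.
BACKENDS = {
    "summary": summarize_via_cloud_api,
}

def run_use_case(use_case: str, text: str) -> str:
    handler = BACKENDS.get(use_case)
    if handler is None:
        raise ValueError(f"Use case not in catalogue: {use_case}")
    return handler(text)

# Swapping the implementation is a one-line configuration change:
BACKENDS["summary"] = summarize_via_hosted_model
```

In a real deployment this dispatch would live in an API gateway or service mesh rather than a Python dictionary, but the governance principle is the same: the catalogue entry is the contract, the backend is a replaceable detail.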
Automated compliance and efficiency through ease of use
The hub performs central control functions that go far beyond mere service access. Automatic filtering and classification of documents prevents the upload of sensitive data and ensures that compliance requirements are enforced at the technical level. At the same time, sensitive information is automatically anonymised before being forwarded to AI services, systematically minimising data protection risks. Another important aspect is the significantly improved training efficiency, as employees do not have to learn complex prompts but can use predefined, optimised workflows. This reduces both training times and the susceptibility to errors when using AI.
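The pre-forwarding filter mentioned above can be illustrated with a deliberately simplified sketch: sensitive patterns are redacted before a request ever reaches an external AI service. The regular expressions below are toy assumptions for demonstration; production systems would rely on far more robust classification, e.g. NER-based PII detection.

```python
import re

# Illustrative redaction rules -- simplified assumptions, not a complete
# or production-grade PII catalogue.
REDACTION_RULES = [
    (re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"), "[EMAIL]"),   # e-mail addresses
    (re.compile(r"\b(?:\d[ -]?){13,19}\b"), "[CARD-NUMBER]"),  # card-like digit runs
]

def sanitize(text: str) -> str:
    """Redact sensitive patterns before forwarding a prompt to an AI backend."""
    for pattern, placeholder in REDACTION_RULES:
        text = pattern.sub(placeholder, text)
    return text

print(sanitize("Contact jane.doe@example.com about card 4111 1111 1111 1111."))
```

Because the filter sits in the hub's middleware, it applies uniformly to every use case and backend, which is what allows compliance to be enforced at the technical level rather than relying on individual user discipline.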
Strategic diversity of options and investment security through backend flexibility
The technical architecture enables the gradual implementation of various AI services, thereby creating strategic diversity of options for corporate management. Middleware mediates between the use cases in the hub and the actual technical implementation of generative AI. Cloud-based solutions offer rapid integration of commercial services and enable an immediate start of production with proven technologies. Hosted models – known as model-as-a-service offerings – expand the spectrum with specialised cloud offerings that can be optimised for specific use cases while providing greater control over data processing and compliance. On-premise solutions, finally, represent the most far-reaching variant, in which self-operated models provide maximum data control and satisfy the strictest compliance requirements.
This architectural flexibility allows AI investments to be scaled gradually while pursuing different sourcing strategies. Depending on the development of usage requirements and available budgets, investment decisions can be made in a targeted manner without rendering previous investments obsolete.
Governance mechanisms for management
The framework implements several levels of governance that are directly relevant to corporate management:
- Strategic management: The hub concept makes it possible to centrally evaluate new AI technologies and integrate them gradually without disrupting existing processes.
- Risk management: Systematic evaluation and categorisation of all use cases in terms of data protection, compliance and strategic risks. Automated compliance checks significantly reduce liability risk.
- Procurement optimisation: Centralised recording of all requirements allows licences and services to be strategically bundled and optimal contract terms to be negotiated.
- Cost control and budget planning: Complete transparency of AI costs enables precise budget planning and ROI analyses. Cost centres can be assigned to individual departments and usage limits defined.
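The cost-control bullet above can be sketched as a minimal ledger: every hub request is booked against a cost centre, and requests beyond a defined budget limit are rejected. The class name, figures and departments are illustrative assumptions, not part of any specific product.

```python
# Hedged sketch of per-cost-centre budget enforcement in the hub middleware.

class CostCentreLedger:
    def __init__(self, limits: dict):
        self.limits = limits                       # monthly budget per cost centre
        self.spent = {cc: 0.0 for cc in limits}    # running spend per cost centre

    def book(self, cost_centre: str, cost: float) -> bool:
        """Record a request's cost; return False if the budget would be exceeded."""
        if self.spent[cost_centre] + cost > self.limits[cost_centre]:
            return False                           # usage limit reached -> request blocked
        self.spent[cost_centre] += cost
        return True

ledger = CostCentreLedger({"marketing": 100.0, "legal": 50.0})
ledger.book("marketing", 30.0)   # within budget, accepted
ledger.book("legal", 60.0)       # would exceed the 50.0 limit, rejected
```

Because every request passes through the hub, this ledger doubles as the data source for the ROI analyses and budget planning mentioned above: full cost transparency is a by-product of the architecture, not a separate reporting exercise.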
Relevance and limitations of the AI hub
The approach outlined here allows organisations to gain initial experience with the productive use of generative AI across a wide range of technical implementations, and offers employees a central point of contact. It can also simplify administration and reduce the risk of shadow AI.
However, in addition to these advantages, this approach also has important limitations that need to be highlighted:
- It does not eliminate the need for an overarching AI strategy and the corresponding fundamental governance issues. Fundamental considerations regarding the use of AI in the company and the possibility of process transformation are not part of the approach and should be addressed separately.
- The restriction to individual use cases must be appropriate for the organisation. If comprehensive solutions such as Copilot, or AI agents already deeply integrated into applications and systems, are in use, the hub approach adds little value. To achieve the necessary user acceptance, a preliminary potential analysis should also determine which use cases are relevant.
- The approach is primarily aimed at organisations that are still in the early stages of their AI transformation. Even though the AI hub makes it easy to get started, its introduction should definitely be accompanied by appropriate change management measures.
Conclusion: AI governance as a strategic competitive advantage?
The AI hub model shows that effective AI governance does not have to hinder innovation, but can actually enable it in a targeted manner. For C-level executives, this framework offers the opportunity to harness the transformative power of AI without losing control over costs, risks and strategic dependencies.
The key advantage lies in the combination of centralised control and operational flexibility: while governance processes minimise risks and make costs transparent, the flexible backend architecture enables rapid adaptation to new technologies and market developments.
Organisations that invest in well-thought-out AI governance today are creating a sustainable competitive advantage for themselves. They can implement AI innovations faster and more securely than competitors who are still struggling with unregulated AI landscapes. In a world where AI is increasingly becoming a differentiating factor, professional AI governance is not a nice-to-have, but a strategic must.
If you would like to learn more, explore AI governance for your organisation in general or the relevance of this solution in particular, please contact our experts at ELCA Advisory. We look forward to hearing from you.
Deep Dive: Why AI governance is important
The current situation in many organisations is highly problematic from a governance perspective. Our experience shows three key risk areas that are of strategic importance for corporate management.
- Uncontrolled cost development is the first critical risk. Without centralised control, employees use a wide variety of AI tools independently, often via personal accounts or shadow IT solutions. This leads to completely opaque costs and makes strategic budget planning considerably more difficult. For CFOs, AI thus becomes an incalculable cost factor that is beyond their control.
- Compliance and data protection risks are the second major source of danger. Employees operate in a regulatory vacuum and act without clear guidelines or an understanding of the legal implications of their actions. Sensitive company data is transferred to external AI providers without taking data protection regulations or compliance requirements into account. This can lead to significant legal consequences, fines and lasting damage to reputation.
- The third risk lies in the emergence of strategic dependencies. The unplanned use of various AI services creates a ‘proliferation of tools,’ resulting in uncontrolled dependencies on external providers. This situation complicates both strategic management and the development of possible exit strategies and can significantly restrict the organisation's freedom of action in the long term.
Sources
1: https://www.bitkom.org/sites/main/files/2024-10/241016-bitkom-charts-ki.pdf; https://www.zhaw.ch/storage/psychologie/upload/iap/studie/8._IAP_Studie_-_Generative_KI_bei_der_Arbeit.pdf
2: https://www.mediachange.ch/media//pdf/publications/AI_ResultsReport_de_final_V2_.pdf
3: https://www.zhaw.ch/storage/psychologie/upload/iap/studie/8._IAP_Studie_-_Generative_KI_bei_der_Arbeit.pdf