Artificial intelligence has long been a reality in Switzerland: almost half of the population already uses AI tools – among younger people, the figure is over 80%.1 But while private use is booming, companies are lagging behind: only a fraction of them have clear AI strategies. The biggest risk? Shadow AI. More than half of employees worldwide use unauthorised AI applications – often with sensitive data. This shadow usage is also growing rapidly in Switzerland, jeopardising compliance and security. Companies must act now to balance innovation and governance.
AI caught between innovation, risk and costs
This challenge is exacerbated by the extraordinarily dynamic development of the AI landscape. New models, services and providers are emerging at a pace that overwhelms traditional IT governance processes. At the same time, this puts even more pressure on IT budgets, while the scope for investment remains limited. The following challenges arise in particular in the conflict between innovation, risk and costs:
- Employees come up with new ideas for AI applications almost daily and want to use them at work just as they already do in their private lives.
- Managers do not want to fall behind strategically on this key technology. They receive offers that appear very attractive at first glance and are often ill-equipped to assess them critically.
- IT organisations are often not yet ready for the secure operation and integration of AI applications.

For decision-makers, the main challenge is to strike a balance in the 'magic triangle' of innovation, risk and cost. They need solutions that unleash the innovative power of their organisation while keeping the associated risks and costs manageable. The key lies in well-thought-out AI governance that combines flexibility and control.
Key risks associated with the use of generative AI
The use of generative AI often entails a number of risks that are not always apparent to everyone and are rarely actively managed:
- Shadow AI: Unregulated use by employees leads to loss of control and security risks
- Uncontrolled costs: A lack of rules for licences, token usage and cloud contracts can blow budgets
- Compliance and data protection risks: Unclear or missing guidelines mean that sensitive data may be transferred to external providers unchecked
- Strategic dependencies: The uncoordinated use of many tools creates dependencies that are difficult to control
Strategic diversity of options and investment security through backend flexibility
The technical architecture enables the gradual implementation of various AI services, thereby creating strategic diversity of options for corporate management. Middleware mediates between the use cases in the hub and the actual technical implementation of generative AI. Cloud-based solutions allow rapid integration of commercial services and an immediate start of production with proven technologies. Hosted models – known as model-as-a-service offerings – expand the spectrum with specialised cloud offerings that can be optimised for specific use cases while providing greater control over data processing and compliance. On-premise solutions, finally, represent the most far-reaching variant, in which proprietary models are operated in-house for maximum data control and the strictest compliance requirements.

This architectural flexibility allows AI investments to be scaled gradually while pursuing different sourcing strategies. Depending on the development of usage requirements and available budgets, investment decisions can be made in a targeted manner without rendering previous investments obsolete.
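To make the middleware idea concrete, here is a minimal, hypothetical sketch in Python. The class and method names (`AIBackend`, `AIMiddleware`, `complete`) are illustrative assumptions, not part of any specific product: the point is only that use cases talk to one stable interface while the backend behind each use case can be swapped between cloud, model-as-a-service and on-premise deployments.

```python
from abc import ABC, abstractmethod

class AIBackend(ABC):
    """Common interface the middleware exposes to all hub use cases.
    (Illustrative sketch; names are assumptions, not a real product API.)"""
    @abstractmethod
    def complete(self, prompt: str) -> str: ...

class CloudBackend(AIBackend):
    """Commercial cloud service: fastest route to production."""
    def complete(self, prompt: str) -> str:
        return f"[cloud] answer to: {prompt}"

class OnPremBackend(AIBackend):
    """Self-hosted proprietary model: maximum data control."""
    def complete(self, prompt: str) -> str:
        return f"[on-prem] answer to: {prompt}"

class AIMiddleware:
    """Routes each use case to its configured backend, so backends can be
    replaced without touching the use cases themselves."""
    def __init__(self, routing: dict[str, AIBackend]):
        self.routing = routing

    def run(self, use_case: str, prompt: str) -> str:
        return self.routing[use_case].complete(prompt)

# Example routing: a sensitive HR use case stays on-premise,
# while a low-risk translation use case runs in the cloud.
mw = AIMiddleware({
    "translation": CloudBackend(),
    "hr-assistant": OnPremBackend(),
})
print(mw.run("hr-assistant", "summarise this contract"))
```

Swapping `OnPremBackend()` for a cloud backend later is a one-line configuration change, which is precisely what keeps earlier investments from becoming obsolete.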
Governance mechanisms for management
The framework implements several levels of governance that are directly relevant to corporate management:
- Strategic management: The hub concept makes it possible to centrally evaluate new AI technologies and integrate them gradually without disrupting existing processes.
- Risk management: Systematic evaluation and categorisation of all use cases in terms of data protection, compliance and strategic risks. Automated compliance checks significantly reduce liability risk.
- Procurement optimisation: Centralised recording of all requirements allows licences and services to be strategically bundled and optimal contract terms to be negotiated.
- Cost control and budget planning: Complete transparency of AI costs enables precise budget planning and ROI analyses. Cost centres can be assigned to individual departments and usage limits defined.
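The cost-control point above can be sketched in a few lines of Python. This is a simplified illustration under assumed names (`CostTracker`, a flat per-1k-token price); a real platform would meter usage per provider contract, but the mechanism – attribute every token to a cost centre and enforce a cap – is the same.

```python
from collections import defaultdict

class CostTracker:
    """Tracks token usage per cost centre and enforces a usage limit.
    Illustrative sketch: names and the flat token price are assumptions."""
    def __init__(self, limits: dict[str, int], price_per_1k_tokens: float = 0.002):
        self.limits = limits              # max tokens per cost centre
        self.price = price_per_1k_tokens  # assumed flat price for the sketch
        self.usage = defaultdict(int)     # tokens consumed so far

    def record(self, cost_centre: str, tokens: int) -> None:
        """Reject the request if it would exceed the centre's limit."""
        if self.usage[cost_centre] + tokens > self.limits.get(cost_centre, 0):
            raise RuntimeError(f"usage limit exceeded for {cost_centre}")
        self.usage[cost_centre] += tokens

    def cost(self, cost_centre: str) -> float:
        """Current spend of a cost centre, for budget reporting."""
        return self.usage[cost_centre] / 1000 * self.price

tracker = CostTracker({"marketing": 100_000})
tracker.record("marketing", 42_000)
print(f"marketing spend so far: ${tracker.cost('marketing'):.2f}")
```

Because every request passes through the central platform, this kind of per-department accounting falls out of the architecture almost for free, which is what makes the ROI analyses mentioned above feasible.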
Relevance and limitations of the central AI services platform
The approach outlined here allows organisations to gain initial experience with the productive use of generative AI across a wide range of technical implementations, and offers employees a central point of contact. It can also simplify administration and reduce the risk of shadow AI.
However, in addition to these advantages, this approach also has important limitations that need to be highlighted:
- It does not eliminate the need for an overarching AI strategy and the corresponding fundamental governance issues. Fundamental considerations regarding the use of AI in the company and the possibility of process transformation are not part of the approach and should be addressed separately.
- The restriction to individual use cases must be appropriate for the organisation. If comprehensive solutions such as Copilot, or AI agents that are already deeply integrated into applications and systems, are already in use, the hub approach adds little. To achieve the necessary user acceptance, a preliminary potential analysis should also be carried out to determine which use cases are relevant.
- The approach is primarily aimed at organisations that are still in the early stages of their AI transformation. Even though the central AI services platform makes it easy to get started, its introduction should definitely be accompanied by appropriate change management measures.
Conclusion: AI governance as a strategic competitive advantage?
The central AI services platform model shows that effective AI governance does not have to hinder innovation, but can actually enable it in a targeted manner. For C-level executives, this framework offers the opportunity to harness the transformative power of AI without losing control over costs, risks and strategic dependencies.
The key advantage lies in the combination of centralised control and operational flexibility: while governance processes minimise risks and make costs transparent, the flexible backend architecture enables rapid adaptation to new technologies and market developments.
Organisations that invest in well-thought-out AI governance today are creating a sustainable competitive advantage for themselves. They can implement AI innovations faster and more securely than competitors who are still struggling with unregulated AI landscapes. In a world where AI is increasingly becoming a differentiating factor, professional AI governance is not a nice-to-have, but a strategic must.
If you would like to learn more, explore AI governance for your organisation in general or the relevance of this solution in particular, please contact our experts at ELCA Advisory. We look forward to hearing from you.
Deep Dive: Why AI governance is important
The current situation in many organisations is highly problematic from a governance perspective.
Our experience shows three key risk areas that are of strategic importance for corporate management:
- Uncontrolled cost development is the first critical risk. Without centralised control, employees use a wide variety of AI tools independently, often via personal accounts or shadow IT solutions. This leads to completely opaque costs and makes strategic budget planning considerably more difficult. For CFOs, AI thus becomes an incalculable cost factor that is beyond their control.
- Compliance and data protection risks are the second major source of danger. Employees operate in a regulatory vacuum and act without clear guidelines or an understanding of the legal implications of their actions. Sensitive company data is transferred to external AI providers without taking data protection regulations or compliance requirements into account. This can lead to significant legal consequences, fines and lasting damage to reputation.
- The third risk lies in the emergence of strategic dependencies. The unplanned use of various AI services creates a ‘proliferation of tools,’ resulting in uncontrolled dependencies on external providers. This situation complicates both strategic management and the development of possible exit strategies and can significantly restrict the organisation's freedom of action in the long term.
Meet the author: Nadine TSCHICHOLD-GÜRMAN
Practice Leader Public Sector & Professional Services
Nadine Tschichold leads our Public Sector & Professional Services practices. She is committed to driving innovation and digital transformation within the public sector in Switzerland, covering the federal government, cantons and municipalities. Prior to joining ELCA, Nadine led the build-up of the project management office at MeteoSwiss, where she guided and managed numerous projects, including cross-organizational projects in collaboration with various federal offices. Nadine started her professional career in a large consulting company after her master's degree in Computer Science and a PhD in Neural Networks and AI, both at ETH Zürich.
Meet the author: Nicolas Zahn
Senior Manager - ELCA Advisory
Nicolas Zahn is a Senior Manager in our Zürich office. Nicolas holds a Master of Arts in International Affairs from the Graduate Institute Geneva. During a fellowship programme, he engaged intensively with digital transformation in the public sector and completed working stays at the OECD, in Singapore and Estonia, and at a German think tank. At ELCA, his focus is on consulting for digital transformation including AI, covering various aspects from the development of a corresponding strategy and the strategic alignment of business and IT, to managing the implementation of the developed strategies and concepts.
