idgard | uniscon GmbH
Forums it-sa Expo, Knowledge Forum F

AI security: Sovereign Cloud solutions and secure data processing

Large language models are susceptible to manipulation and data leaks. Secure operation in a sovereign cloud provides a remedy.

calendar_today Tue, 22.10.2024, 10:30 - 10:45

event_available On site

place Forum, Booth 9-443

Themes

Cloud Security, Data security / DLP / Know-how protection, Trend topic

Event

This action is part of the event Forums it-sa Expo

Action Video


Action description

The use of large language models (LLMs) offers enormous opportunities for companies, but is also associated with considerable challenges, especially when it comes to processing sensitive data (financial or project data, personal data, etc.). This is because traditional LLMs are susceptible to manipulation and data leaks. One way to counteract these risks is to operate in a highly secure, sovereign cloud.

Challenges for sensitive data when using LLMs

Although LLMs are powerful tools, they harbour risks that should not be neglected, especially when processing sensitive information.

Typical challenges include:
- Unauthorised access to training data or models
- Model manipulation (directly or through manipulation of training data)
- Risk of data leaks during use of the model
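As a rough illustration of the data-leak risk in the list above, one common mitigation is to redact obvious personal data from a prompt before it leaves the trusted environment. The sketch below is hypothetical and not part of idgard; the regular expressions are examples only, not a complete PII detector.

```python
import re

# Example patterns for obvious personal data; a real deployment would need
# far more thorough detection than these two regular expressions.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "IBAN": re.compile(r"\b[A-Z]{2}\d{2}[A-Z0-9]{11,30}\b"),
}

def redact(prompt: str) -> str:
    """Replace each match with a placeholder such as [EMAIL] or [IBAN]."""
    for label, pattern in PATTERNS.items():
        prompt = pattern.sub(f"[{label}]", prompt)
    return prompt
```

Such a pre-processing step reduces, but does not eliminate, the risk of sensitive data reaching an external model.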

Traditional cloud and on-premise approaches quickly reach their limits here, which makes the use of these technologies extremely difficult, especially in highly regulated industries such as finance, law and healthcare.

A sovereign cloud as a solution

The term ‘sovereign cloud’ is often used in a vague way. It refers to a cloud service that is verifiably protected against unauthorised access at all times - whether by external attackers, state actors or the company's own employees with admin rights. Data sovereignty and data integrity must be guaranteed at all times - a task that quite a few providers fail to fulfil.

The idgard cloud service fulfils these requirements thanks to its three-tier security concept. In addition to operation in highly secure German data centres and independent certification, this essentially comprises the patented sealed cloud technology.

This technology uses a confidential computing approach to demonstrably prevent not only external access, but also internal, unauthorised access. This means that data can be reliably protected both during transmission (‘data in transit’) and storage (‘data at rest’) as well as during processing (‘data in use’).
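To make one slice of this concrete, the data-at-rest integrity part, a keyed checksum lets tampering with stored data be detected. This toy sketch is not idgard's mechanism (sealed-cloud confidential computing additionally protects data in use); it only illustrates the basic integrity idea.

```python
import hashlib
import hmac
import secrets

# Toy illustration of data-at-rest integrity: a keyed MAC makes silent
# tampering with stored data detectable. The key must be kept outside the
# storage system, beyond the reach of storage operators.
key = secrets.token_bytes(32)

def seal(data: bytes) -> bytes:
    """Return an authentication tag for the data being stored."""
    return hmac.new(key, data, hashlib.sha256).digest()

def verify(data: bytes, tag: bytes) -> bool:
    """Check the tag in constant time before trusting retrieved data."""
    return hmac.compare_digest(seal(data), tag)
```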

This not only offers the highest level of security when working with sensitive data, but also enables the secure use of AI models without the user having to deal with problems such as key management and other complexities.

Integration of AI in idgard 

The use of AI in a sovereign cloud like idgard starts with freely available open-source LLMs such as Llama or Mistral. These models are trained in idgard's sealed environment and customised to the specific needs of our customers. The entire process takes place within idgard, which demonstrably prevents manipulation and data leaks. Even prompts, i.e. the requests to the model, remain protected in this way.

Possible use cases include ‘communication’ with documents or summarising information. More complex applications, such as assistance functions for digital committee and board communication, are also conceivable.
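A minimal sketch of the ‘communication with documents’ flow, assuming a retrieval step first picks relevant context before the model is queried. Word-overlap ranking stands in here for the embedding-based retrieval a real system would use; all names are illustrative.

```python
import re
from collections import Counter

# Toy retrieval step: rank document passages by word overlap with the
# query, then hand the best passage to the LLM as context. A production
# system would use embeddings; this only shows the shape of the flow.

def tokens(text: str) -> Counter:
    """Lowercased word multiset of a text."""
    return Counter(re.findall(r"\w+", text.lower()))

def best_passage(query: str, passages: list[str]) -> str:
    """Return the passage sharing the most words with the query."""
    q = tokens(query)
    return max(passages, key=lambda p: sum((tokens(p) & q).values()))
```

Because retrieval and generation both run inside the sealed environment, neither the documents nor the query need to leave it.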

Another possible step in the future is the establishment of a dedicated AI cluster within the Sealed Cloud, which will enable our customers to further develop their own AI applications in a highly secure environment. 

Conclusion: AI security in the cloud thanks to sealing 

The integration of AI systems in sovereign cloud services such as idgard minimises the attack surfaces when dealing with AI. Unauthorised access to models, training data and queries is prevented and the risk of model manipulation is significantly reduced. The unique security architecture of idgard allows LLMs to be operated and further developed securely without compromising on user-friendliness. 





Downloads

Language: German

Questions and Answers: No

Speaker
