AI and cybersecurity: 3 notes worth knowing

Artificial intelligence is now as natural a part of studying and work as searching for information online. It can speed up research, help with drafting text, or offer a new perspective. At the same time, however, it also raises (cyber)security questions: some of them are complex, while others can already be answered today.

18 Mar 2026 | Pavel Brejcha, Natalia Peterková

1. Which AI tools should you use?

For everyday tasks such as brainstorming, summarizing publicly available text, or finding inspiration for your studies, you can use whichever tool suits you best. In these cases, the specific service you choose is not crucial. What matters more is this: do not accept AI outputs without checking them, and when studying, follow the rules set by your instructor, course, or faculty. Masaryk University supports the use of AI in education, while also emphasizing its ethical, transparent, and secure use. The situation changes when you enter more sensitive, internal, or work-related information into an AI tool, such as personal data, login credentials, or research data (MU’s data classification can be found here).

In such cases, it is essential to use only tools whose data handling is contractually covered in relation to MU.


DeepSeek: why do we recommend caution?

In July 2025, NÚKIB, the Czech National Cyber and Information Security Agency, warned against certain products developed by DeepSeek. For ordinary users, this does not constitute a blanket ban, but it is a clear reason for caution and for avoiding the entry of sensitive content into these tools. At MU, we therefore recommend not using the public version of DeepSeek and instead giving preference to approved university services. You can find more about the issue of using DeepSeek in our warning.

What do we recommend in practice?

At MU, we therefore recommend using mainly these two tools for more sensitive work:

  • Microsoft Copilot Chat – the recommended AI tool for MUNI users. After signing in with your university account, you should see a green shield indicating that your data is not used to train the models and that its processing complies with contractual terms.
  • Google Gemini – available at MU through Google Workspace after activation in IS MU. Here as well, make sure that you are signed in with your university account.

For more advanced use in teaching, science, and research, e-INFRA AI models are also available at MU in the AI-as-a-Service mode. This is a package of AI tools operated directly within the MU environment. The data you enter remains within the university’s infrastructure and is not sent to public AI services outside MU. This makes it suitable for more sensitive work and for more technically advanced users.
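Many AI-as-a-Service platforms expose their models through an OpenAI-compatible REST API, so working with a university-hosted model can look much like working with a public one, except that requests stay inside the institution. The following is a minimal Python sketch of that pattern; the endpoint URL, model name, and token variable are placeholders of ours, not e-INFRA's actual interface, so check the service documentation for the real values.

# A minimal sketch of calling a university-hosted model over an
# OpenAI-compatible REST API, a common way AI-as-a-Service platforms
# are exposed. Endpoint, model name, and token are placeholders, not
# e-INFRA's actual interface.
import json
import os
import urllib.request

API_BASE = "https://ai.example.muni.cz/v1"  # placeholder endpoint
API_KEY = os.environ["AI_SERVICE_TOKEN"]    # never hard-code credentials

payload = {
    "model": "example-model",               # placeholder model name
    "messages": [
        {"role": "user", "content": "Summarize the attached meeting notes."}
    ],
}

req = urllib.request.Request(
    f"{API_BASE}/chat/completions",
    data=json.dumps(payload).encode(),
    headers={
        "Authorization": f"Bearer {API_KEY}",
        "Content-Type": "application/json",
    },
)

with urllib.request.urlopen(req) as resp:
    reply = json.load(resp)

print(reply["choices"][0]["message"]["content"])

The point of the sketch is the base URL: because it points into the university's infrastructure, your prompts and data never leave it, unlike calls to a public AI service.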

2. How can you secure your AI tools?

People often do not use AI services just once, but over a longer period of time. As a result, their accounts gradually accumulate conversation history, work and study topics, ideas, personal notes, and sometimes even very sensitive content. This is especially important because, according to Harvard Business Review, the most common use of generative AI in 2025 was therapy. If someone else were to gain access to such an account, they would not just see a single chat, but could build a fairly accurate picture of what the user is dealing with, how they think, what stage of life they are in, what is troubling them, or what they tend to confide.


This is precisely why a long-term conversation history can reveal far more than a single document or email. In fact, OpenAI launched ChatGPT Health this year as a separate space for health-related questions with enhanced privacy protection, which itself shows that some types of conversations require a higher level of protection. In its reports, the company also states that around 0.15% of weekly active users engage in conversations showing explicit signs of possible suicidal planning or intent.

What to watch out for: Do not grant AI tools permissions unless you are sure why they need them and what they will do with them. At the same time, do not install unverified add-ons, extensions, or external AI tools, especially if they request access to your chats, documents, email, or browser. An overview of specific risky situations can be found in the expandable box below.

If an AI account may contain this kind of sensitive information, its security needs to reflect that: the foundation is a strong passphrase, ideally stored in a password manager, and multi-factor authentication enabled on your account.
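To illustrate what a strong passphrase means in practice, here is a minimal Python sketch of diceware-style generation; the wordlist.txt file is an assumed local word list with one word per line, not something your password manager requires. Five words drawn at random from a 7,776-word list give roughly 64 bits of entropy, far more than a typical eight-character password.

# A minimal sketch of generating a strong passphrase from a word list,
# in the spirit of diceware. The word-list file is an assumption; any
# large list of common words works. Five words from a 7,776-word list
# give about 64 bits of entropy (log2(7776^5) ≈ 64.6).
import math
import secrets

# Assumed: one word per line in a local file such as a diceware list.
with open("wordlist.txt", encoding="utf-8") as f:
    words = [line.strip() for line in f if line.strip()]

# secrets (not random) draws from the OS's cryptographic RNG.
passphrase = "-".join(secrets.choice(words) for _ in range(5))
entropy_bits = 5 * math.log2(len(words))

print(passphrase)
print(f"Approximate entropy: {entropy_bits:.0f} bits")

In practice, you can simply let your password manager generate and store the passphrase; the sketch only shows why a few random words beat a short "complex" password.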

What risky situations can arise when using unverified AI tools?

Attack: Overly broad permissions granted to an AI tool or agent
Example: An AI agent is given access to email, documents, calendars, or files even though it does not need them for the task.
Impact: The tool may then read, modify, or delete data unrelated to the task.
What should you do? Grant only the minimum permissions necessary. If the tool does not need access to your email, documents, or calendar, do not allow it.

Attack: Malicious browser extension
Example: The extension appears to be an AI assistant, but it collects chat history and browsing data. This year, Microsoft described malicious AI extensions that gathered full URLs as well as chat content from platforms such as ChatGPT and DeepSeek.
Impact: Theft of chats, internal procedures, code, or other sensitive information.
What should you do? Install only verified extensions from trusted publishers. Before installing, check the permissions and the reason why the extension needs them.

Attack: Prompt injection through an extension or web content
Example: An extension or malicious content on a webpage can read or alter the prompt and the AI’s response. LayerX describes this as a “man-in-the-prompt” attack carried out through a browser extension.
Impact: The AI may then work with a manipulated prompt, disclose data, or return an output you did not intend.
What should you do? Be very cautious with browser extensions and sidebar tools. Even a trusted environment can be weakened by what is running inside it.

Attack: A “free” and especially an unverified AI tool
Example: The service has an unclear operator, confusing terms, and requests access to more data than would be necessary.
Impact: You do not know how the data is handled, who has access to it, or whether it may end up outside the environment you expect.
What should you do? For work-related purposes, use only official and approved services, ideally tools recommended by MU or operated within the university environment.
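To make the “check the permissions” advice above concrete, here is a minimal Python sketch that reads a browser extension's manifest.json and flags permissions granting broad access. The file path and the list of “broad” permissions are our assumptions for illustration; in Chrome you can review the same information directly under chrome://extensions without any code.

# A minimal sketch of reviewing an extension's requested permissions:
# read its manifest.json and flag entries that grant broad access.
# The manifest path and the "broad" set below are illustrative choices.
import json

BROAD_PERMISSIONS = {
    "<all_urls>",     # read and change data on every site you visit
    "tabs",           # see the URLs of all open tabs
    "history",        # read full browsing history
    "webRequest",     # observe network traffic
    "cookies",        # read site cookies, including session tokens
    "clipboardRead",  # read anything you copy
}

with open("manifest.json", encoding="utf-8") as f:
    manifest = json.load(f)

requested = set(manifest.get("permissions", []))
requested |= set(manifest.get("host_permissions", []))  # Manifest V3

for perm in sorted(requested & BROAD_PERMISSIONS):
    print(f"Broad permission requested: {perm}")
if any(p == "<all_urls>" or p.endswith("://*/*") for p in requested):
    print("This extension can touch every page you open; be sure you trust it.")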

What do we recommend in practice?

Enable multi-factor authentication for the AI tools you use. For some tools, MFA is configured directly in the service account; for others, it is managed through the account you use to sign in. (If you are curious how the one-time codes themselves work, a short sketch follows this list.)

  • Microsoft Copilot: For a school or work account, multi-factor authentication is configured as part of Masaryk University’s single sign-on.
  • Google Gemini: For Gemini, MFA is handled through your Google account by enabling 2-Step Verification. Google officially describes the process in your account settings under Security / How you sign in to Google.
  • ChatGPT / OpenAI: In ChatGPT, open Settings → Security → Multi-factor authentication and complete the setup by following the instructions. OpenAI also states that passkeys are supported.
  • Grok / xAI: xAI officially states that users can add MFA through xAI Accounts and that recovery codes are available on the Security page. Their documentation also explicitly recommends enabling MFA.
  • Claude / Anthropic: For Claude, protect above all the email account you use to sign in. If you use Gmail, enable 2-Step Verification on your Google account.
  • Perplexity: For Perplexity, secure primarily your Google or Apple account that you use to sign in, and enable MFA on that account.
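For the curious, the one-time codes these authenticator apps generate are not tied to any single vendor: most implement TOTP (RFC 6238), which derives a short code from a shared secret and the current time. Here is a minimal Python sketch; the Base32 secret is a made-up example, not a real account key.

# A minimal sketch of how authenticator apps compute the 6-digit codes
# used for MFA, following RFC 6238 (TOTP) on top of RFC 4226 (HOTP).
# The Base32 secret below is a made-up example, not a real account key.
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, period: int = 30, digits: int = 6) -> str:
    key = base64.b32decode(secret_b32.upper())
    counter = int(time.time()) // period          # time steps since epoch
    msg = struct.pack(">Q", counter)              # 8-byte big-endian counter
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                    # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

print(totp("JBSWY3DPEHPK3PXP"))  # changes every 30 seconds

This is also why the secret you scan as a QR code during setup must stay private: anyone who holds it can compute exactly the same codes as your app.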

3. How do attackers use AI?

Today, AI is also helping attackers. It makes preparing phishing emails and other social engineering scenarios significantly easier. Wondering what the biggest threat is? What once required time, experience, and technical skill can now be created faster, more cheaply, and in a much more convincing form. AI can generate personalized phishing messages at scale, without language mistakes and with a high degree of credibility. According to Harvard Business Review, 60% of participants fell for AI-automated phishing, which is more than with conventional phishing attacks.


At the same time, it is no longer just about writing fraudulent messages. Microsoft warns that attackers are now deploying generative AI across different stages of an attack — from target reconnaissance and phishing preparation to malware development and the processing of stolen data after a system has been compromised. AI is therefore not used only to make attacks look more convincing, but also to make them faster, cheaper, and easier to scale. The speed of this development was also illustrated by a recent Czech test: at the start, the Claude model was given a single instruction to breach a corporate network, and within 21 hours, without any further guidance, it had fully taken control of a simulated corporate environment (that is, all computers, all data, all employee accounts, and so on).

In addition to emails, voice phishing is also on the rise — fraudulent phone calls and voice messages that may use AI-generated voices. That this is not a marginal phenomenon is also shown by cases from the Czech Republic: as early as June 2025, experts and the police warned of a wave of scam calls in which attackers used AI to imitate the voice of a loved one and asked for money or sensitive information. At the beginning of February 2026, Czech Radio also warned again about the same risk in connection with deepfake voice scams.

For users, this means one thing: it is necessary to expect that attacks will become increasingly convincing. A suspicious message no longer has to be full of mistakes or seem amateurish. It may appear to be legitimate communication from a lecturer, colleague, study office, or university service.

How do state-sponsored groups abuse AI?

In its Threat Landscape 2025, the European Union Agency for Cybersecurity (ENISA) also warns that state-sponsored groups and cybercriminal actors are increasingly using AI to boost productivity and optimize their malicious activities. That is also why it is important to use AI tools with caution and to remember that working with them has a security dimension as well. We covered this topic in more detail in our article AI as a Friend and Foe in Cybersecurity.

What do we recommend in practice?

The practical takeaway is simple:

  • Even messages that look trustworthy should be verified — check the sender, links, attachments, and the overall context. Be especially cautious with messages that pressure you to respond quickly, open a file, or sign in through a link in an email (a small illustration of deceptive links follows this list).
  • Install the TrafficLight browser extension — a tool that alerts you to suspicious and fraudulent websites.
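As a small illustration of why checking links matters, here is a minimal Python sketch that scans HTML for anchors whose visible text looks like one address while the actual href points somewhere else, a classic phishing trick. The example HTML and domain names are invented; real phishing detection needs far more than this.

# A minimal sketch of spotting deceptive links in an HTML email:
# visible text that looks like one address while the actual href
# points elsewhere. Illustrative only.
from html.parser import HTMLParser
from urllib.parse import urlparse

class LinkAuditor(HTMLParser):
    def __init__(self):
        super().__init__()
        self._href = None
        self._text = []
        self.findings = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self._href = dict(attrs).get("href", "")
            self._text = []

    def handle_data(self, data):
        if self._href is not None:
            self._text.append(data)

    def handle_endtag(self, tag):
        if tag == "a" and self._href is not None:
            text = "".join(self._text).strip()
            href_host = urlparse(self._href).hostname or ""
            # If the visible text itself looks like a URL, compare domains.
            text_host = urlparse(text if "://" in text else "http://" + text).hostname or ""
            if "." in text and text_host and text_host != href_host:
                self.findings.append((text, self._href))
            self._href = None

# Invented example: the displayed address and the real target differ.
html = '<a href="https://muni-login.example.net">https://is.muni.cz</a>'
auditor = LinkAuditor()
auditor.feed(html)
for shown, actual in auditor.findings:
    print(f"Displayed {shown!r}, but actually points to {actual!r}")

The same check can be done by hand: hover over a link before clicking and compare the address your mail client shows with the one the message displays.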

In Conclusion

We are closely monitoring the topic of the secure use of AI and continuously assessing its impact on the university environment. Given the rapid development of both the tools themselves and the related risks, we will continue to focus on this area and provide further recommendations and warnings as the situation evolves. Follow our social media channels (FB/Instagram) or visit our portal directly at security.muni.cz.

This result was supported by the SOCCER project, funded under Grant Agreement No. 101128073, with the support of the European Cybersecurity Competence Center (ECCC).
