How to increase ChatGPT security and protect your data

Last update: April 21
  • Adjusting ChatGPT's data, memory, and history controls significantly reduces the exposure of your information.
  • Choosing between Free/Plus, Teams, Enterprise, or API accounts determines your level of privacy and whether your data is used to train the AI.
  • Properly configuring Custom GPTs and avoiding sharing sensitive data is key to GDPR compliance and protecting personal and corporate information.

Security in ChatGPT

ChatGPT's popularity has exploded, and hardly a day goes by that we don't use it for something: work, study, summarizing documents, or simply out of curiosity. But as we use it more, we share more and more personal and professional information with the tool, often without stopping to think about what happens to that data afterwards.

If you're concerned about your privacy and want to keep leveraging AI without surprises, you're in the right place. In this guide you'll see, step by step, how to increase ChatGPT security and which settings you should adjust, which type of account fits your situation, and which basic practices help you avoid mistakes when sharing sensitive information.

How and why ChatGPT uses your data

Data privacy in ChatGPT

To understand what you can do to protect yourself, the first thing is to be clear about what ChatGPT does with the information you enter. OpenAI trains its models with large amounts of text from public websites, open-licensed content, paid data, and human-generated materials.

On top of all that, on standard accounts (Free, Plus, or Pro), your conversations and files can be used to further train the AI unless you manually disable that option in the settings. In other words, the questions you ask, the documents you upload, and even your voice and video recordings can all help improve the model.

Although OpenAI states that it does not use your data to build advertising profiles or marketing campaigns, it can use it to fine-tune the system's behavior, increase the accuracy of responses, and improve its safety. Once something is included in a training dataset, it's virtually impossible to "remove" it from the model.

Even if you disable the use of your content for training, ChatGPT still collects certain information for technical, legal, and security reasons. This includes account data (name, email, payment details), usage data (how you use the tool), technical information (IP address, browser), and session logs. It is part of the normal operation of the service, but it is worth bearing in mind.

Privacy options within the ChatGPT interface itself

Regardless of your plan, the app includes several settings you can adjust to minimize the exposure of your data. They are available in the web version as well as in the mobile and desktop apps.

Turn off “Improve the model for everyone” and similar options

The key setting is to disable the use of your content to train the AI. To do this, log in to your ChatGPT account and open the settings menu (click your name or photo, usually at the bottom left of the website). Within that menu, look for the "Data controls" section under "Account".

In that section you will see several options related to the processing of your data. The most important one is "Improve the model for everyone". If you disable it, OpenAI will stop using your messages and files to train new models, although they will still appear in your history unless you do something about it.

On that same screen you can also uncheck options such as "Include audio recordings" or "Include your video recordings". If you regularly use ChatGPT with voice or upload videos, disabling these boxes further limits the amount of information the company can use for training.

Use temporary chats for sensitive conversations

When you need to handle particularly sensitive information, the best option is to use temporary chats (a kind of incognito mode). Activating one starts a conversation that works much like a private browser window.

Whatever you write there will not be saved in your history and, according to OpenAI, will not be used to train the models or to feed ChatGPT's internal memory. They are "disposable" sessions: it's as if they never existed, except that they may be kept for up to 30 days, solely to prevent abuse, before being deleted.

This mode is ideal when dealing with sensitive work matters: documents containing customer data, or legal, medical, or financial information. Even so, even in temporary mode, it's best not to write overly identifiable information if you can avoid it.


Control memory and personalization

In addition to history, ChatGPT may save certain "key" data about you to offer more personalized answers in future conversations: for example, your tastes, your profession, recurring projects, or details you've mentioned several times.

If you don't want the assistant to retrieve information from old chats, go into settings and then to the "Personalization" section. There you will find the option "Reference saved memories"; if you disable it, the system will stop using those memories in new conversations.

Manage your chat history and delete what you don't want to keep

By default, ChatGPT displays a sidebar with all your conversations, which you can title for easier access. This is very useful in practice, but it also means that anyone with access to your account will see everything you've asked.

If you share a computer or think someone might access your account, it's advisable to delete or archive the most sensitive conversations. When you archive them, they disappear from the main list, but you can retrieve them whenever you want; if you delete them, the conversation is erased and you won't be able to access it again.

If you're looking to start fresh, under "Data controls" there is an option to delete your entire history at once. This reduces the amount of information stored in your account, especially if you've been using the service for months or years.

Review and remove shared links

ChatGPT allows you to generate links to share full conversations with other people. It's convenient when you want to show a thread of questions to a colleague or client, but it also has its risks.

Over time, it's easy to forget which links you generated and who you shared them with. That's why it's a good idea to check periodically under Settings → Data controls → Manage shared links. There you can see a list of the links that are still active and delete them so that those conversations are no longer accessible from outside.

ChatGPT account types and security differences

Not all ChatGPT accounts offer the same level of protection by default. The option you choose greatly influences how your data is handled, especially if you work in a company or handle regulated information (healthcare, finance, education, etc.).

Free and Plus: personal options with manual settings

Free is ChatGPT's no-cost plan, with access to a limited model, somewhat slower and subject to availability based on demand. Plus is the paid subscription (about $20 a month) that gives you queue priority, faster responses, and access to more powerful models like GPT-4 or GPT-4o, plus advanced features such as file, voice, or video analysis.

In both plans, your conversations can be used to train the AI by default. You are the one who needs to go to "Data controls" and disable "Improve the model for everyone" and the audio/video usage options if you want more privacy.

Paying for Plus does not automatically mean more protection: the improvement lies in the quality of the model and the extra features, not in the processing of your data. From a privacy perspective, the key remains configuring your account properly and not sharing information you shouldn't.

ChatGPT Teams: for SMEs and teams that need more control

ChatGPT Teams is designed for small and medium-sized teams that want to collaborate with AI without taking on as many risks as with personal accounts. It offers full access to advanced models, without the usage limitations of Plus, and adds features geared towards group work.

In terms of security, the difference is significant: OpenAI does not, by contract, use Teams conversations to train its models. The files you upload, internal instructions, and team content remain within the client's environment, which is crucial if you handle corporate documents.

The cost is usually slightly higher than Plus per person, but in return you get greater control over collaboration and more robust data protection, without needing to make the leap to the Enterprise level.


ChatGPT Enterprise: maximum security and regulatory compliance

For large companies, regulated sectors, or environments with strict legal requirements, the right option is ChatGPT Enterprise. Here we are talking about a product geared towards the corporate sector, with specific data protection and compliance agreements (for example, SOC 2, GDPR, and similar).

This plan offers advanced encryption, configurable data retention, large-scale user management, and integration with internal systems. And, just like in Teams, the data is not used to train OpenAI models, something that is contractually stipulated.

Prices vary depending on the size of the organization and specific needs, but they usually fall within a range of tens of dollars per person per month, clearly aimed at companies that need total control over what comes out of and what goes into their systems.

APIs and agents: the path to even more control

If you develop products or automate processes, a very interesting alternative is to use the OpenAI API instead of the standard ChatGPT interface. The API allows you to integrate the models into your own applications, websites, and workflows.

The great advantage from a security point of view is that OpenAI does not use data sent via the API to train its models. You pay per use (tokens) and retain much more control over the information that circulates, since you decide what is stored, how it is anonymized, and for how long.
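As an illustration of that control, here is a minimal sketch in Python (standard library only) that builds a Chat Completions request by hand. The endpoint is OpenAI's documented chat completions URL; the model name and the idea of a central "builder" function are assumptions for this example. Centralizing request construction gives you a single place to log, filter, or redact what leaves your systems before any HTTP client sends it.

```python
import json

# Documented OpenAI Chat Completions endpoint.
OPENAI_CHAT_URL = "https://api.openai.com/v1/chat/completions"

def build_chat_request(api_key: str, user_text: str, model: str = "gpt-4o-mini"):
    """Build URL, headers, and JSON body for a Chat Completions call.

    Keeping request construction in one place lets you inspect, log,
    or redact exactly what will leave your systems before sending it
    with any HTTP client (requests, httpx, urllib, ...).
    """
    headers = {
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json",
    }
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": user_text}],
    }
    return OPENAI_CHAT_URL, headers, json.dumps(payload)
```

Because the payload is assembled in one function, a redaction or audit step can be added there once and applied to every call your application makes.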

On top of that API you can build virtual agents or assistants specialized in specific tasks (customer service, data analysis, report generation, etc.). By implementing appropriate authentication and authorization, you can restrict access to certain functions or databases to specific individuals only.
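As a sketch of that idea, the snippet below (Python; the tool names and role map are hypothetical) gates every agent tool call behind a role check, so only authorized users can reach sensitive functions or databases:

```python
# Hypothetical map of which roles may invoke which agent tools.
TOOL_PERMISSIONS = {
    "query_customer_db": {"support", "admin"},
    "generate_report": {"analyst", "admin"},
}

def authorize(user_roles: set, tool_name: str) -> bool:
    """True only if the user holds at least one role allowed for the tool."""
    return bool(user_roles & TOOL_PERMISSIONS.get(tool_name, set()))

def run_tool(user_roles: set, tool_name: str, handler, *args):
    """Refuse to execute a tool unless the caller passes the role check."""
    if not authorize(user_roles, tool_name):
        raise PermissionError(f"Access to '{tool_name}' denied")
    return handler(*args)
```

The point of the design is that the check lives in one choke point (`run_tool`), so no individual tool handler can be reached without passing through it.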

Custom GPTs: powerful, but risky if configured incorrectly

Custom GPTs are customized versions of ChatGPT that you can adapt to your needs: add specific instructions, upload documents, connect external tools… They are very useful, but an improperly configured one can become a significant security vulnerability.

How private Custom GPTs work

Each Custom GPT is a kind of independent “instance” within your account. You can create several, each with its own instructions, uploaded files, and objectives: one for customer service, another to help with internal marketing, another for data analysis, etc.

When you mark them as private, only you can see and use them; they don't appear in the public gallery. From the "My GPTs" section, you can edit their instructions, change attachments, or delete them completely if you no longer need them.

In Teams or Enterprise environments it is also possible to share Custom GPTs with other team members while maintaining control over who can use them. It's a convenient way to distribute internal resources without exposing anything to the outside world.

Public Custom GPTs and Links: The Typical Failure

The problem arises when a Custom GPT containing strategic, internal, or confidential documents is mistakenly left public, or hidden behind a shareable link. In that case, anyone who finds it (or receives the link) could bombard it with questions until they extract sensitive information from your files.

If, for example, you create a GPT with business plans, customer databases, or internal documentation and publish it thinking that "nobody will see it", you are opening the door for third parties to scrape confidential data without too much trouble. This is one of the most frequent mistakes when starting to play with this feature.

To minimize risks, get into the habit of always checking the visibility settings of your Custom GPTs before uploading important documentation: private for sensitive materials; hidden or public only when nothing in them could harm you if it got out.

Best practices when working with Custom GPTs

If you want to get the most out of these customized versions without compromising security, it's a good idea to follow a few basic guidelines. To begin with, avoid directly uploading files with highly sensitive information (customer personal data, medical records, contracts with identifiable data, etc.) to GPTs that may be public or hidden.

Instead, whenever possible, anonymize the information before uploading it: remove names, addresses, ID numbers, phone numbers… and replace them with generic identifiers. That way, even if someone manages to gain access, the damage will be much smaller.
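A minimal sketch of that idea in Python: the patterns below are illustrative (an email address, a Spanish-style phone number, a Spanish DNI) and far from exhaustive, so treat this as a starting point rather than a guarantee, and prefer a vetted PII-detection library for real data.

```python
import re

# Illustrative patterns only; real PII detection needs a vetted library
# plus manual review of the output before anything is uploaded.
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "PHONE": re.compile(r"\b\d{3}[ -]?\d{3}[ -]?\d{3}\b"),  # e.g. 612 345 678
    "DNI": re.compile(r"\b\d{8}[A-Z]\b"),  # Spanish national ID format
}

def anonymize(text: str) -> str:
    """Replace detected identifiers with generic placeholders."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text
```

For example, `anonymize("Contact ana@example.com, DNI 12345678Z")` returns the text with both values replaced by `[EMAIL]` and `[DNI]` placeholders, so a leaked document reveals structure but not identities.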

Additionally, if you have an IT team or developers, it may be more prudent to connect your internal systems via the API and control which data is sent and which is not, instead of directly uploading sensitive files into a Custom GPT accessible from the ChatGPT interface.


What information should you never put in ChatGPT

Beyond account settings and types, the best defense is common sense. ChatGPT is not intended as a manager of confidential data, and using it that way could get you into serious legal and security trouble.

As a general rule, you should avoid any information that allows a specific person to be identified when you have no legal basis or consent to process it in that way. We're talking about ID cards, passport numbers, postal addresses, phone numbers, personal emails, bank details, credit cards, or clinical information.

Copying and pasting invoices, contracts, emails, or customer purchase histories as-is is also a bad idea. Even if the intention is to prepare a report or improve a service, you may be violating regulations such as the GDPR or the LOPDGDD if you do not respect the obligations of confidentiality and data minimization.

On a personal level, you also have to be careful. Uploading, for example, a blood test or a medical report without covering up your identifying information is a rather reckless way to expose your health and identity to a cloud service, even if you trust that the company respects the law.

Legal framework: GDPR, LOPD and AI Law

In the European Union, and therefore in Spain, the use of artificial intelligence that processes personal data is subject to very clear regulations, among them the General Data Protection Regulation (GDPR), the LOPDGDD, and the new European Artificial Intelligence Act.

These rules require transparency regarding how data is processed, legal basis for each processing activity, adequate security measures, and respect for the rights of individuals (access, rectification, erasure, objection, etc.). If you use ChatGPT professionally and enter data about clients, employees, or suppliers, all of this directly affects you.

That's why, in companies and organizations, it's essential to define clear internal policies on the use of ChatGPT and other AI tools, train the teams, and decide which accounts may be used (Free/Plus for generic topics, Teams or Enterprise for real data, the API with your own controls, etc.).

It's not just about complying for the sake of it, but about protecting your most sensitive asset: the personal and corporate information you manage. A lapse in judgment with an AI tool can end up becoming a serious reputational problem and lead to penalties.

Extra measures to strengthen your digital security when using ChatGPT

In addition to the tool's own settings, there are a number of basic cybersecurity practices that should always be applied when using ChatGPT, especially if you work from multiple devices or often connect from outside your home.

The first is to avoid using ChatGPT on public or unsecured WiFi networks, such as those in cafes, airports, or hotels. If you have no other option, use a reliable VPN to encrypt your connection, and make sure you're on an HTTPS page before typing anything important.

It also helps a lot to periodically review your account activity: check your chat history, make sure there are no unusual logins, log out of shared devices, and don't save your password in browsers on public or work computers used by many people.

If you detect suspicious messages or links, strange download requests, or behavior that doesn't seem right, do not interact with them and notify OpenAI's official support channels or your IT department. And, of course, change your passwords if you believe your account may be compromised.

Finally, check which extensions, plugins, or applications you have connected to your ChatGPT account or your cloud services. Linking unnecessary tools only increases the attack surface and the possibility of third parties extracting more data than they should.

Adopting good habits, choosing the right account type, and exploring ChatGPT's configuration options make all the difference between careless use and truly secure use. Those who take the time to fine-tune these details and establish a simple internal protocol can unlock the full potential of AI without handing over critical information or violating regulations.
