What is ChatGPT, and is it safe to use?

ChatGPT has stolen headlines since it was launched for public use in late 2022. The tool, an Artificial Intelligence (AI)-powered chatbot, can convincingly write almost anything based on a limited brief.

The impact on consumers could be huge, with improved search engines, customer service and even product recommendations among the potential future uses of this nascent yet remarkably advanced technology. However, while its responses are convincing at first glance, its tendency to be ‘confidently incorrect’ has highlighted significant concerns about how this type of tool might be used and abused.

With the UK regulator, the Competition and Markets Authority (CMA), announcing a review of the AI market, the pace at which these tools are developing is clearly under scrutiny. Read on to find out more about ChatGPT and similar tools, how they work, and whether you should be concerned.

What is ChatGPT?

ChatGPT was developed by the AI firm OpenAI, which is partly owned by Microsoft along with several other investment firms.

It’s a chatbot that responds to almost any prompt, be it a question or command, in convincingly legible prose. GPT stands for Generative Pre-trained Transformer, which means it’s a tool that generates responses based on what it has already learned. It is a paid-for tool, but there is a free version that you can use if the service isn’t too busy.

ChatGPT isn’t the only chatbot that works in this way, but it’s the one that’s gained the most attention in recent months. 

Where does ChatGPT get its information?

ChatGPT uses a collection of Large Language Models (LLMs), which are numbered according to how advanced they are - the free web version currently uses GPT-3.5. These models are trained on all sorts of sources, including the web, books, social media and more. The resulting language dataset comprises hundreds of billions of words.

The free version of ChatGPT is based on data collection that finished in early 2022, so it does not 'know' anything about the world after that time. There’s also a premium version called ChatGPT Plus (which costs $20 a month) that’s furnished with more up-to-date information from GPT-4. 


A ChatGPT-like tool is also available to people who use Microsoft’s Bing search engine. The responses to your queries are based on the pre-trained GPT-4 model combined with more up-to-the-minute information about the world right now, and include clickable citations. Google has a similar chat-based search tool called Bard, but it’s only available to those with an invite.


Can I trust the information I get from ChatGPT?

In short, no. But in the same way that you might use a range of sources to kickstart a research project or better understand what people are saying about a topic, ChatGPT and similar tools can help get you started and surface information you weren’t aware of. The main thing is not to use a chatbot as your primary source of information, but instead to take the answers it gives you and pursue them until you have found the real facts.

It’s important to understand how ChatGPT comes up with its answers. Crudely put, ChatGPT is very good at placing one word after another. It can do this because it has 'learned' so much from the massive data-gathering exercises that form the basis of the model that powers it. As such, it does not 'know' anything at all; all it can do is put words one after another in a way that makes sense.

It does not know how to communicate a level of confidence in what it has written, and if you attempt to probe how it knows what it’s told you – for instance, by asking for a list of citations – it will simply produce a list of things that look like citations but may not be real references at all.


Also keep in mind that the sources of its training data include social media platforms such as Reddit and Twitter. While these are fantastic resources for getting to grips with how people speak to each other conversationally, a portion of online content written by people is false, misleading or even harmful. Since some of what ChatGPT produces is based on what is posted on these and similar platforms, this should give you some idea of how much trust to place in it.


What does ChatGPT do with the information I enter into it?

Ask ChatGPT this question and it tells you that no information you enter into it is stored. This is indeed OpenAI’s policy, but the UK’s National Cyber Security Centre (NCSC) still advises that you should not enter sensitive information (such as personal details or company intellectual property) into chatbots, and that you should not perform queries that could be problematic if made public (for example, sharing your secrets and asking ChatGPT to solve a personal dilemma).

As generative AI tools become more prevalent (see below) and are used by companies for specific customer service purposes, the data you enter could be stored under the T&Cs of the companies you’re communicating with. All well and good, but since GPT-powered chatbots can misunderstand, there is always a chance that the data these companies hold about you is incorrect, which could be a breach of the General Data Protection Regulation (GDPR).

How and where are ChatGPT and other generative AI being used?

Anyone can use ChatGPT for themselves - visit the website, sign up and start experimenting. Just bear in mind the limitations given above on how it works and can be used. ChatGPT is a language model, so it can’t generate art or images like some AI engines. However, it can in theory process images and make recommendations. For example, it could scan an image of what’s left in your fridge and then recommend a recipe for dinner. 

As covered above, the highest profile public use of ChatGPT so far is in Microsoft’s Bing search engine and Edge browser. This follows Microsoft’s multi-billion dollar investment in the technology. Bing Chat enables you to ask questions and look up terms, just as you would with normal web search. It will - in theory - give a more human and useful response. 


Various companies are experimenting with how to use ChatGPT in their services - for example, giving more personalised recommendations on retail websites. The bulk of uses, though, are behind the scenes, often using the tool to process vast amounts of data for anything from improving efficiency to combating fraud.

There are practical ways you can use it, too. Something ChatGPT does very well is explain things in clear and simple terms, so it can be a great way to get a message across – one York student, for example, had a parking fine overturned after using ChatGPT to lay out the details of the appeal. You could also use it as a starting point for broaching a tricky issue, such as a dispute with a neighbour. Just remember that the information ChatGPT generates should be treated as ‘research’, or as a guide, and in most cases should still be checked and validated before it is shared or used.

On the negative side, there are various ways in which this technology can be used to create convincing content that is fake, misleading or even designed for scams. It’s still early days, but evidence has already been uncovered of huge numbers of ‘content farms’ using entirely AI-generated content to lure people to their sites and gain as much advertising revenue as possible. Content farms have always existed, of course, but without the need for real human intervention it is now possible to produce convincing text on an enormous scale at very low cost.

Is there any regulation for AI like ChatGPT?

There is no specific regulation for generative AI tools like ChatGPT, but other laws could well be applied to the responses it produces. 

For example, if ChatGPT generates text that is largely similar to a copyrighted source, it could be in breach of copyright law. In addition, there are already examples of ChatGPT defaming individuals, stating they were involved in crimes they did not commit. This could result in legal action for libel. 

There are undoubtedly many untested cases of where a chatbot could be in breach of the law. For example, if a retailer were to employ the services of a chatbot to provide recommendations for a washing machine, and then that machine was not fit for purpose, there could be a consumer law case to answer. 

AI like ChatGPT has to obey data protection rules on the use and storage of personal data. Indeed, Italy’s data protection regulator was quick off the mark to ban ChatGPT on data protection grounds, to prevent personal data being stored and then shared back to other users. The ban was subsequently lifted after OpenAI “addressed or clarified” the issues the regulator had raised.



source https://www.which.co.uk/news/article/what-is-chatgpt-and-is-it-safe-to-use-aF0Ba4j5xAmr