A technology game-changer is here: AI. OpenAI has unveiled ChatGPT, which has created a buzz worldwide and is being hailed as a game changer and, at the same time, speculated to be a risk to some jobs. From logistics to copywriting, AI is changing the way we work, which is why venture capital is pouring billions into AI firms. ChatGPT uses an advanced artificial-intelligence algorithm that generates human-like text in real time, and it is the latest addition to OpenAI's suite of machine-learning-driven language models.
It has been designed specifically to generate text conversationally, holding conversations with people much as a person would, so it can be used as a chatbot or virtual assistant. The massive dataset it was trained on allows it to learn the complexities of natural language, and its output can be difficult to distinguish from human-written text. So, will it threaten jobs at ed-tech companies? Let's discuss that in this article.
Which jobs are threatened by ChatGPT?
Here are some jobs that are considered replaceable by ChatGPT:
1. Copywriting
A copywriter is a professional who writes clear, concise copy for ads, social media, and marketing campaigns. The job requires creative thinking, strong communication, and problem-solving skills. So, those who write text for ads, marketing materials, and social media should pay attention to the potential of ChatGPT and what it can do.
Consider an example: if you want to tell people about one of the best copywriting tools, say Grammarly (a plagiarism detector), ChatGPT makes the task easier. Just ask ChatGPT about Grammarly as a productivity tool, and this is what you will get:
However, AI has a drawback: it can guide you or provide information about such copywriting tools, but it cannot do their job. If you are trying to find the plagiarism percentage in an article, it cannot show the source or the percentage of plagiarism.
Now, the question is: can Google detect AI-generated content? Yes. AI-altered plagiarised content used in articles may well be detected by the algorithms of Google or Facebook.
2. Content Writing
Content-writing jobs are in high demand these days, especially at ed-tech companies. A content writer is a professional who can write informative and engaging content on any topic, which helps with branding; it is a versatile profile. Now, ChatGPT is considered a threat to this kind of job, but why? Take a look at this example.
Consider an example: if you are asked to write a 1,000-word blog on examples of cyber fraud in India, just type that request into ChatGPT.
However, such an AI has a drawback: it cannot provide every specific piece of data. Say you type "show recent cases on the IPC"; it won't be able to provide you with the details.
3. Customer Support
Customer-support jobs are primarily aimed at solving customer queries through e-mail. But ChatGPT can do the same, providing customer support through emails and messages while offering "empathetic help" that makes customers feel they are talking with a human being rather than an AI.
Consider an example: suppose you are writing a customer-support email for a client with a query, say, "Refer some top autonomous law colleges in India that consider the CLAT score." This is what you will get:
However, the problem with AI in such a case is that it cannot answer product-specific questions about a company. If a client asks about the validity and other details of Finology's CLAT PG 2024 course, the AI will not be acquainted with such information.
4. Programming
We believe programmers won't deny that ChatGPT offers an opportunity to automate some tedious coding tasks. However, it can also threaten entry-level programming jobs: ChatGPT can debug code, solve coding problems, translate code from one programming language to another, and more.
Consider an example: ask ChatGPT to write a Python program for a hangman game.
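To give a sense of what such a request produces, here is a minimal sketch of the kind of Python hangman logic ChatGPT typically generates. The word list and the limit of six wrong guesses are illustrative choices, not part of any particular ChatGPT answer:

```python
import random

WORDS = ["python", "hangman", "editor"]  # illustrative word list

def mask_word(word, guessed):
    """Show correctly guessed letters; hide the rest with underscores."""
    return " ".join(ch if ch in guessed else "_" for ch in word)

def play(max_wrong=6):
    """Run one interactive round of hangman in the terminal."""
    word = random.choice(WORDS)
    guessed = set()
    wrong = 0
    while wrong < max_wrong:
        print(mask_word(word, guessed))
        guess = input("Guess a letter: ").lower()
        if guess in word:
            guessed.add(guess)
            if all(ch in guessed for ch in word):
                print(f"You win! The word was {word}.")
                return
        else:
            wrong += 1
            print(f"Wrong guess ({wrong}/{max_wrong}).")
    print(f"Out of guesses. The word was {word}.")
```

Calling `play()` starts an interactive game. The point is not the game itself but that a short natural-language prompt yields working, structured code, which is exactly why entry-level coding tasks are seen as exposed.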
Will ChatGPT really risk jobs?
It can be said that ChatGPT puts the jobs of coders, writers, and journalists at risk. But don't forget that it is not a human mind; it is ultimately a machine. It can generate text from the data available to it, but it cannot create new information, and there can be inaccuracies as well: AI can provide completely incorrect information and present it as fact.
Take a real incident: when researchers at Stanford University gave GPT-3 unfinished prompts related to Muslims, the completions were anti-Muslim and had the potential to hurt many users.