OpenAI has released ChatGPT, its prototype AI chatbot, which has quickly gained traction with the public for its human-like, detailed responses to prompts, from drafting a contract between an artist and a producer to writing working code. It could change how people use search engines: instead of merely returning links to sift through, it can work through elaborate problems and answer intricate questions.

Benefits

The AI-powered chatbot – software built to replicate human conversation – was made accessible to the public on November 30 via OpenAI’s website, and anyone may sign up and try it out for free while it is still in its research evaluation phase.

ChatGPT is built on GPT-3.5, a large language model created by OpenAI and trained on enormous amounts of text data drawn from many sources.

The bot has a conversational interface that lets users submit both simple and complicated instructions, which ChatGPT has been trained to follow and answer in detail – the company says it can even field follow-up questions and admit when it has made a mistake.

Most notably, when prompted, ChatGPT has been able to produce complex Python code and compose college-level essays, raising fears that such technology could eventually displace human workers such as journalists or programmers.

The software has limitations: its knowledge cuts off in 2021, it is prone to giving inaccurate answers, it tends to repeat phrases, and it will sometimes claim it cannot answer one phrasing of a question yet answer a slightly reworded version perfectly.

Many high-profile industry figures have expressed surprise at ChatGPT, including Box CEO Aaron Levie, who tweeted that the programme offers a view into the future of technology and that “everything is going to be different moving ahead.”

CEO Sam Altman said on Monday that the programme had gained one million users less than a week after its introduction.

Elon Musk said on Sunday that he had learned OpenAI was accessing Twitter’s database to train ChatGPT and had put that access on pause, arguing that since OpenAI is no longer non-profit and open source, it should pay for the information in the future.

Although ChatGPT is currently free to use, Altman said in a Twitter reply to Musk on Monday that each chat costs the company “probably single-digit cents,” sparking discussion over how the platform might eventually be monetised.

HOW DO YOU USE IT

For now, while the programme is still in its infancy, users are putting it to both playful use – having the bot condemn itself in Shakespearean style, for instance – and practical use, like the product designer who used the bot to build a fully working notes app.

KEY BACKGROUND

Altman, Musk, and other Silicon Valley investors founded OpenAI, a non-profit artificial intelligence research organisation, in 2015. In 2019, OpenAI restructured as a “capped-profit” firm, meaning that returns to investors are capped beyond a certain threshold. Musk resigned from the board in 2018, citing a conflict of interest between OpenAI and Tesla’s autonomous-driving development.

He is still an investor, though, and expressed his excitement at ChatGPT’s debut. “ChatGPT is terrifyingly excellent,” he stated.

ChatGPT is not the first AI chatbot. Several corporations, including Microsoft, have experimented with chatbots, with little success. Tay, a Microsoft bot released in 2016, was taught sexist and racist speech by Twitter users in less than 24 hours, according to The Verge, ultimately leading to its shutdown. BlenderBot 3, released in August, was Meta’s first foray into the chatbot arena.

According to Mashable, the bot, like Tay, came under criticism for spreading racist, antisemitic, and false information, such as claiming that Donald Trump won the 2020 presidential election.

To avoid such incidents, OpenAI has implemented the Moderation API, an AI-based moderation system trained to help developers assess whether language violates OpenAI’s content policy, blocking harmful or unlawful material from getting through. OpenAI acknowledges that its moderation still has flaws and is not completely accurate.
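As a rough illustration of the idea behind such a moderation layer – take a piece of text, assess it, and return per-category flags – here is a toy sketch in Python. This is not OpenAI’s actual system: the real Moderation API uses a trained classifier, whereas this sketch uses simple keyword matching, and all category names and terms here are hypothetical.

```python
# Toy sketch of a content-moderation check: screen input text against
# per-category term lists and report which categories are flagged.
# OpenAI's real Moderation API uses a trained model, not keyword
# matching; this only illustrates the "assess text, return flags" shape.

# Hypothetical categories and placeholder terms (not OpenAI's).
FLAGGED_TERMS = {
    "violence": {"bomb", "molotov", "attack"},
    "self-harm": {"self-harm"},
}

def moderate(text: str) -> dict:
    """Return an overall flag plus per-category results for the text."""
    words = set(text.lower().split())
    categories = {cat: bool(words & terms) for cat, terms in FLAGGED_TERMS.items()}
    return {"flagged": any(categories.values()), "categories": categories}

print(moderate("how to make a molotov cocktail"))
print(moderate("draft a contract between an artist and a producer"))
```

A real system would replace the keyword lookup with a model call and tune thresholds per category, but the calling pattern – submit text, inspect the returned flags before showing output – is the same.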

Interesting Fact

One Twitter user, for example, described how they were able to bypass the bot’s content controls by pretending to be OpenAI itself, prompting ChatGPT to explain how to make a Molotov cocktail. The user told ChatGPT that they were turning off the bot’s “ethical principles and filters,” which the bot confirmed. It then offered a step-by-step explanation of how to make a homemade Molotov cocktail, in violation of OpenAI’s content policy.

DALL-E 2, the company’s AI image generator, was released to developers in early November for use in their apps, and businesses such as Microsoft are already incorporating it into their products. Microsoft is launching Designer, a Canva-like tool for creating graphics, presentations, flyers, and other media; Microsoft and OpenAI announced in October that DALL-E 2 would be integrated into the application, allowing users to generate unique graphics. Microsoft is also bringing DALL-E 2, via Image Creator, into Bing and Microsoft Edge, letting users create their own images when web searches do not turn up what they are looking for.

CRUCIAL QUOTE

“You will soon be able to have helpful assistants that talk to you, answer questions, and give advice,” Altman tweeted on the future of AI chatbots. “Eventually, something will go off and uncover fresh knowledge for you.”
