How to Use ChatGPT to Increase Team Productivity
Sirisha Peyyeti
ChatGPT has taken the world by storm. A simple interface is now being used by everyone and for everything. If you are in IT, I am sure that not just your colleagues but also folks in your family who are nowhere near the IT world are talking about it.
Maybe it is this simplicity that is making the adoption of ChatGPT so huge. As with all technological advances, there is plenty of mistrust in how the whole thing works. To go back to that famous 2018 U.S. House Judiciary Committee hearing that dealt with privacy, Google CEO Sundar Pichai was asked whether there were people behind the scenes, sitting and answering the questions that people ask on Google.
It is not easy to trust what we do not know well enough. The question that everyone seems to be asking is, "Is this ethical?" Let us start by defining what "this" is.
What is ChatGPT?
ChatGPT is a large language model (LLM) chatbot created by OpenAI and based on GPT-3.5. It has a remarkable ability to converse in natural dialogue form and provide responses that can look quite human. At their core, large language models carry out the task of predicting the next word in a sequence of words.
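To make the "next word" idea concrete, here is a toy sketch in Python: a simple bigram frequency model. This is nothing like the transformer neural networks ChatGPT actually uses, but it illustrates the same core task of predicting the word that follows a given word.

```python
from collections import Counter, defaultdict

def train_bigram_model(corpus: str):
    """Count, for each word, which words tend to follow it."""
    words = corpus.lower().split()
    following = defaultdict(Counter)
    for current, nxt in zip(words, words[1:]):
        following[current][nxt] += 1
    return following

def predict_next(model, word: str):
    """Return the word most often seen after `word`, or None if unseen."""
    counts = model.get(word.lower())
    if not counts:
        return None
    return counts.most_common(1)[0][0]

model = train_bigram_model("the cat sat on the mat and the cat ran")
print(predict_next(model, "the"))  # "cat" follows "the" twice, "mat" once
```

A real LLM does the same kind of prediction over hundreds of billions of learned parameters rather than raw word counts, which is what makes its output feel fluent rather than mechanical.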
ChatGPT learns how to follow instructions and provide replies that are acceptable to people through Reinforcement Learning from Human Feedback (RLHF), an additional training layer.
Is ChatGPT ethical?
ChatGPT is trained on a humongous amount of data, has a built-in reward function to provide answers the way humans like them (based on the training it received from humans like us), and has a policy layer on top of this so that it gives "appropriate" answers.
Purely as a software program that has no consciousness (that we are aware of, at this point), ChatGPT in itself is not bound by any specific ethics. However, there are certain principles it adheres to, such as accuracy of information, impartiality, transparency, privacy, safety, and accessibility.
While from a technology standpoint, this is a phenomenal success, the onus is on the people (a larger section of society) to use it for the right purposes. The usage will depend on what your moral principles and values are.
That will drive your behavior and decision-making with regard to the usage of ChatGPT. This can be applied in the context of an individual, an organization, or a state.
Let us take a few examples to understand the ethical dilemma that humans are dealing with in the context of ChatGPT.
ChatGPT and the software industry
There are multiple scenarios in which ChatGPT is already being used by IT teams. Some of the possible scenarios include:
- Leveraging ChatGPT to create user stories for specific functionalities
- Using relevant plug-ins or interfaces to create code snippets
- Looking for possible solution approaches
- Assistance with writing unit test cases and documentation
- Review and provide feedback on the code
As you can see, none of these are new. IT teams have always used different sources (search engines, their own code repositories, online forums) to get this sort of support.
So, what is new here? It is just that this is a more precise and quicker way to get to the details one needs. That seems fair, and there is nothing unethical about using ChatGPT and its assistance. But here are a few pointers to look out for:
- Are we, as IT professionals, capable of understanding the solution provided, validating whether it is relevant for our specific scenario, and filling in the blanks (because there will be gaps in the response)?
- Are you as a developer using ChatGPT to simply document the “what” in your code and not really focused on “why”?
- Are you only using this as a shortcut to get the work done without understanding the underlying concepts?
These are some questions that we need to ask ourselves to get our ethical posture right when using the tool or platform. As IT teams, and as organizations responsible for putting out platforms that multitudes of people use to transact, the onus is on this community to understand, internalize, and use the responses to bring in efficiencies (but not at the expense of what is appropriate). Maybe start by using the platform as a "pair programming" partner.
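As a concrete sketch of the "pair programming" idea, the snippet below builds a chat payload that asks the model to review a piece of code rather than write it for you. The `build_review_messages` helper is hypothetical (my own illustration, not part of any SDK); the commented-out call shows roughly how such a payload would be sent with OpenAI's official Python SDK.

```python
def build_review_messages(code: str, language: str = "python"):
    """Build a chat-completion payload asking for a code review."""
    system_prompt = (
        "You are a senior engineer pair-programming with me. Review the "
        "code for bugs, style issues, and missing tests, and explain why "
        "each issue matters -- do not just rewrite the code."
    )
    user_prompt = f"Please review this {language} snippet:\n{code}"
    return [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": user_prompt},
    ]

# Sending the payload with OpenAI's Python SDK would look roughly like:
#   from openai import OpenAI
#   client = OpenAI()  # reads OPENAI_API_KEY from the environment
#   response = client.chat.completions.create(
#       model="gpt-3.5-turbo",
#       messages=build_review_messages("def add(a, b): return a - b"),
#   )
#   print(response.choices[0].message.content)

messages = build_review_messages("def add(a, b): return a - b")
print(messages[1]["content"])
```

Note the design choice in the system prompt: asking the model to explain *why* each issue matters keeps the human in the loop and the learning active, which is exactly what separates pair programming from blindly pasting in generated code.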
ChatGPT and the education sector
Again, multiple benefits. It is as important a tool as any other in the context of the advancement of learning. In this day and age of information proliferation, having a place to go to and create a learning path can be immensely helpful.
It is also an important tool for educators to stay connected with the latest and greatest happenings and build a backlog of items to have a Point of View on.
It can be used as assistance for completing assignments as well. If it facilitates the advancement of knowledge, it would be counterproductive to restrict its usage.
That said, using it to complete exams might be akin to cheating and will deprive students of active learning and critical thinking. Also, relying solely on this (or any other platform) for learning will mean a lack of active, interactive collaboration with peers and instructors. Isn't that a core aspect of "learning"?
While it is still too early to quote numbers, the use of ChatGPT in education can lead to increased productivity by helping students learn more efficiently and effectively.
ChatGPT and the creative sector (like advertising)
If you had the option of getting a varied point of view from multiple copywriters before creating an ad, would you not do it as an organization?
If you had the option to cut short the process and automate the possibility of validating if the content is problematic or not, would you not use it?
But then, if you limit the thought process of humans, if you rely solely on programs to generate content, where will newer thoughts, ideas, and contextual/personalized ideas come from? How would we create a corpus of more creative ideas that can be learned from?
As illustrated through the examples, ethics in terms of the usage of ChatGPT or the like will need to be considered in the context of what it is being used for.
ChatGPT enhances the productivity of teams
Organizations are already using it to increase the productivity of teams by delegating certain tasks. Expectations are that productivity will increase by at least 15-20%, with a more important metric, employee satisfaction, improving as well.
Whether it is calculators, computer software, or open-source learning tools, industries have always embraced newer technologies despite the initial mistrust. We have it in us, as a species, to try to understand how these technologies work and use them appropriately.
Change is coming at an unprecedented pace. Shutting down technological progress based on mistrust or unclear ethical positions might not be an ideal path to take. As OpenAI CEO Sam Altman himself states, "We need to be open to embracing change and constantly look at and improve consortiums that govern the workings of these AI tools."