Originally posted on Substack - 14th of July 2023
“Prompting” literally means encouragement, and HR should be good at it. Still, the use of generative AI for HR is somewhat in the discovery phase. Despite the unprecedented global growth of ChatGPT and the recent release of Bard in Europe, corporate adoption of AI projects is still waiting out the errors or minimising the (unknown) risks. The enthusiasm of the board is often tempered when there is no business case yet and their people enter unknown territory with little knowledge of the risks, the impact or the skills needed. Few company policies are in place, and those who already have one are pioneering.
At this point even the founders of OpenAI pulled the emergency brake. While competitors worked out their own LLMs and people could work on their skills, the European Commission even worked out a general policy for generative AI.
For now, the growth of this disruptive technology is mainly driven by individual wins and learnings, yet the impact on HR could be not just efficient and cost-effective, but game-changing.
Unprecedented growth of a new technology
Since its release, ChatGPT grew to a user base of 100 million in only two months. That is unprecedented! The world was shocked by TikTok, remember? It reached 100 million users in nine months, before there were any worries about privacy and data protection. In comparison, it took Google Translate 78 months to acquire a 100-million-user base.
In Q1 and Q2 of 2023 everybody was talking about ChatGPT and generative AI. Only a minority was really using it for business, and if they were, it was probably without a policy or a real business case.
Adoption was mainly driven by wins and learnings at the individual level: automating tasks and exploring the opportunities of generative AI. During this exploration phase at warp speed, several anomalies were discovered in the technology, ranging from gender bias to images of people with extra fingers (or, in my case, fingers grown together).
It was Textio, an AI diversity analytics solution, that discovered a severe gender bias in an early version of ChatGPT earlier this year. Still, because of the huge adoption, the language model behind ChatGPT was able to learn and be improved with every update.
During this period, even the founders of OpenAI pulled the emergency brake and warned of the dangers of extensive usage of, and dependency on, generative AI. This caused the world to reconsider further corporate adoption and usage, as the new technology also brought new challenges to (non-)existing policies. Even the European Commission has worked out a policy for AI, while the competitors of ChatGPT worked on their own language models (Microsoft Copilot and Google Bard, for example).
These language models can all have different applications, depending on how they are used. With OpenAI's ChatGPT, Google's Bard and Microsoft's Copilot I've named three of them. Facebook is also working on its own LLaMA, and besides those tech giants, there already exist hundreds of startups with large language models trained for niche usage.
Generative AI explained
As I understand the matter, not holding a PhD in Machine Learning or Computer Science, the technology is built on a large language model (LLM), meaning it is programmed to generate data (answers) from a certain dataset (the internet) in response to a simple request (a prompt).
The algorithm learns and improves its answers from the language it is fed, the dataset and the further usage. Only, it happens more conversationally than a Google search, and the results are actually actionable; the algorithm can even “pretend” to be some online identity.
I keep thinking of an FAQ crossed with a chatbot gone into overdrive, but you probably get the picture: on the one hand it is an algorithm that can create almost anything you ask it to, from a dataset it has access to; on the other hand it enriches that dataset and improves its creations by learning from its usage.
In combination with conversational AI (dialogue, cf. ChatGPT), generative AI is like an interactive chatbot with more focused knowledge and information, depending on the question, which is also fed back into the learning process of the LLM.
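To make this a bit more tangible, here is a minimal sketch of what sending a prompt to a language model looks like in practice. It assumes the `openai` Python package as it existed around the time of writing (mid-2023, version 0.x) and an API key in the OPENAI_API_KEY environment variable; the model name, the system message and the HR-flavoured question are purely illustrative choices of mine.

```python
# Minimal sketch: send a prompt to an LLM and print the generated answer.
# Assumes the `openai` Python package (0.x, mid-2023) and a valid API key;
# the model name and the HR-flavoured prompt are illustrative, not prescriptive.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]  # never hard-code keys

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system", "content": "You are a helpful assistant for an HR team."},
        {"role": "user", "content": "Summarise the main risks of using generative AI "
                                    "for recruitment in five bullet points."},
    ],
    temperature=0.2,  # lower values make the answer more focused and repeatable
)

print(response["choices"][0]["message"]["content"])
```

Paste the same question into the ChatGPT interface and you get the same conversational behaviour; the API simply makes it scriptable.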
The reason ChatGPT was gender-biased at first is that the majority of global internet users hold that point of view. It is something we as people correct for ethical reasons, but the fact that the data from 30 years of internet usage shows the world has a gender bias did not get the attention it deserved.
Just to be clear, the improved updates of ChatGPT no longer show that bias, but the public version still runs in an open environment, while GPT-4 and later versions can be used in a closed environment such as an enterprise cloud or company servers.
Visual by LeewayHertz
Individual focus & prompting
As I said before, the massive adoption of the technology was mainly driven by gains at the individual level. 70% of companies don't even have an AI policy, and even Europe has only just made one. Because company information is also shared with OpenAI, companies would rather wait for GPT-4 (and later) integrations and the adjusted offerings of their current technology providers.
Still, there are corporate gains to be had from individual usage of OpenAI without sharing any sensitive information, but that is a matter of perspective. Generative AI works with assignments, requests or “prompts”, as they are called. These are written individually, or even generated automatically by generative AI that is improving itself.
A prompt must have the following characteristics:
make it clear and specific
keep it relevant and add context
be consistent and give direction
Ensuring that a prompt is clear, relevant, and consistent increases the likelihood of a satisfactory result from generative AI. It helps the model better understand expectations and generate more accurate and actionable answers.
To put it more sharply, the “garbage in, garbage out” principle is definitely something to be aware of when using generative AI, which makes writing decent prompts a new skill for corporate use. That might be something recruiters should think about when reading “another” AI-generated resume or cover letter: people who can write a decent prompt not only have a good understanding of the topic, the context and the goal of the prompt; they have also learned a new technology pretty quickly.
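To make those three characteristics concrete, here is a small Python sketch comparing a vague prompt with a structured one. The template fields (role, context, task, format) and the recruiter example are my own illustration, not an official standard.

```python
# A vague prompt vs. a structured prompt built from the three characteristics above.
# The template fields (role, context, task, format) are illustrative, not a standard.

vague_prompt = "Write a job ad."

def build_prompt(role: str, context: str, task: str, output_format: str) -> str:
    """Combine the three characteristics: clear and specific (task),
    relevant with context (role + context), consistent with direction (format)."""
    return (
        f"You are {role}.\n"
        f"Context: {context}\n"
        f"Task: {task}\n"
        f"Answer format: {output_format}"
    )

structured_prompt = build_prompt(
    role="an experienced HR business partner at a mid-sized software company",
    context="we are hiring a junior recruiter; the tone of voice is informal and inclusive",
    task="write a job ad of roughly 250 words, avoiding gender-coded language",
    output_format="a title, a short intro, bullet points for responsibilities "
                  "and requirements, and a closing call to action",
)

print(structured_prompt)
```

The structured version gives the model everything the list above asks for: a clear and specific task, relevant context, and consistent direction on the expected output.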
Anyway, a good source of HR-related prompts is Neelie Verlinden's article on AIHR, which gives away 21+ prompts for ChatGPT and HR productivity.
Conclusion
In conclusion, the unprecedented growth of generative AI technology, exemplified by OpenAI, has brought forth remarkable advancements and challenges alike. It has spurred discussions on ethics and policies. Prompt writing is becoming a new skill set, while the technology offers immense potential for automation and innovation across industries.
As this new technology continues to evolve, striking a balance between good and responsible usage, continuous improvement, and user-centric applications will be crucial for its long-term success.