Your first interaction with Generative AI (Gen AI) may have begun with a prompt, and the response was likely meaningful but not quite what you needed.
Few of us attempt a follow-up prompt to improve or reframe the output, let alone a series of prompts, in the style of a conversation.
Prompting or prompt engineering is the process of guiding, designing and structuring instructions (or questions) to a Gen AI model to generate meaningful and useful output.
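As a purely illustrative sketch, a well-structured prompt usually states a role, some context, the task and the desired output format. The wording, labels and scenario below are assumptions for illustration, not drawn from any particular tool or prescribed standard:

```python
# A minimal, hypothetical example of a structured prompt.
# The labels (Role, Context, Task, Format) are illustrative conventions,
# not requirements of any particular Gen AI model or product.

prompt = """
Role: You are a policy analyst at an Australian industry body.
Context: Our members are mid-sized manufacturers adopting Gen AI tools.
Task: Summarise the three main privacy risks they should consider,
      in plain language, for a non-technical executive audience.
Format: A short bulleted list, no more than 60 words in total.
"""

# The same prompt text can then be submitted to whichever Gen AI model
# or enterprise copilot the organisation has approved for use.
print(prompt)
```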
Despite the growing range of Gen AI models and the frenetic release of multiple versions of these models, prompting has remained stable as the foundational way of interacting with state-of-the-art AI.
The recently announced advanced capabilities of on-device AI, Agentic AI, contextual retrieval and chain-of-thought reasoning are all based on variations of prompting. This stability of prompting has been leveraged by big tech and third parties alike to build an almost endless flow of practical applications and tools for work, study and lifestyle.
Browsing through aggregator websites for these Gen AI tools (such as GPTStore, Toolify, Findmyaitool) can become overwhelming very quickly, while also raising serious concerns for privacy, security, safety and overall responsible adoption of AI.
Addressing these concerns, big tech has produced subscription-based copilots, assistants and agents such as Microsoft Copilot, Google Gemini, Meta AI and Anthropic Claude.
These enterprise-level prompting tools are built into or integrated with pre-existing technology platforms and inherit the same privacy, security and regulatory commitments as those platforms.
While this accelerates adoption and ensures enterprise data protection, it still does not address the fundamental flaws of AI-generated content: inaccuracy and bias.
Reviewing the quality and relevance of content, fact-checking and filling in the gaps are left to the human domain expert. In some ways this is a silver lining: it fosters responsible adoption of AI through human-in-the-loop review of AI-generated content before it becomes a decision or action.
Prompting is still new, and organisations can do better. The contemporary practice of prompting is largely confined to individual productivity gains. Rarely do we keep a record of which prompts worked and why, nor do the tools support this, as their business models rely on more prompts, not fewer.
Organisations must begin to recognise prompting as the conduit that interlinks Gen AI with its workforce and their collective domain expertise.
This conduit is an organisational asset because the composition, purpose and temporality of prompts written by employees carry granular insight into the organisation’s strategic, tactical and operational positioning. Accumulated over time, a repository of all prompts, suitably tagged and indexed, can unlock innovation potential for workforce development, organisational productivity and business growth.
For workforce development, a Gen AI agent (with human collaboration) can process and analyse this prompt repository to extract common themes and focus areas that inform training programs, information sessions and mentoring needs. Similarly, the repository can help detect productivity gaps (a lack of prompts, poor prompt quality) and guide the development of suitable prompt templates that contextualise organisational knowledge and promote the responsible use of AI.
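As an illustrative sketch only, the record structure, tags and helper function below are assumptions about what a tagged and indexed prompt repository might look like; they are not an established standard or a feature of any specific product:

```python
# Hypothetical sketch of a prompt repository record and a simple
# theme count; field names and the tagging scheme are assumptions.
from dataclasses import dataclass, field
from datetime import datetime
from collections import Counter

@dataclass
class PromptRecord:
    text: str                     # the prompt as written by the employee
    author_role: str              # e.g. "analyst", "engineer"
    created_at: datetime
    tags: list = field(default_factory=list)  # e.g. business unit, task type
    useful: bool = True           # did the output feed a decision or action?

def common_themes(repository, top_n=5):
    """Count the most frequent tags: a crude proxy for training and
    productivity focus areas, before any deeper Gen AI analysis."""
    counts = Counter(tag for record in repository for tag in record.tags)
    return counts.most_common(top_n)

# Example usage with two hypothetical records.
repo = [
    PromptRecord("Draft a tender response summary...", "analyst",
                 datetime(2024, 5, 1), ["tenders", "summarisation"]),
    PromptRecord("Explain this compliance clause...", "engineer",
                 datetime(2024, 5, 2), ["compliance", "summarisation"]),
]
print(common_themes(repo))
```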
Business growth eventuates when the prompt repository is used to evaluate the alignment between organisational objectives and the actual work completed. Gaps in this alignment are opportunities for growth and development. This evaluation can also be used to initiate a review of operations at a granular task-based level of detail.
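Again as a hedged, hypothetical sketch, alignment between stated objectives and actual prompting activity could be approximated by matching objective keywords against repository tags; the objectives, keywords and threshold below are illustrative assumptions, not a prescribed method:

```python
# Hypothetical sketch: flag organisational objectives with little or no
# matching prompt activity in the repository.
from collections import Counter

objectives = {
    "Grow export markets": {"export", "tenders", "international"},
    "Lift compliance maturity": {"compliance", "audit", "policy"},
}

# Tag counts drawn from a prompt repository (see the earlier sketch).
prompt_tags = Counter({"tenders": 12, "summarisation": 30, "compliance": 2})

for objective, keywords in objectives.items():
    activity = sum(prompt_tags[k] for k in keywords)
    if activity < 5:  # arbitrary threshold for a potential gap
        print(f"Potential alignment gap: '{objective}' ({activity} related prompts)")
```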
Australian organisations should look to invest in, and collaborate on, leveraging Gen AI and prompting to improve business innovation and productivity.