The truth? It depends on the design's purpose. An organization's purpose is informed by its values and its profit motive. Artificial intelligence aims to create autonomous systems that can perform tasks without human intervention, while augmented intelligence seeks to enhance human capabilities by providing AI-powered tools and assistance.
Working with AI tools (e.g., Copilot, ChatGPT) should be more conversational and contextual; they should augment our efforts. For example, the search-box paradigm sets an expectation of general accuracy. But when that paradigm is applied to AI tools, we should still be challenging the output and its accuracy (and many users aren't).
If these tools are true “digital co-workers,” they should use dialogue to stimulate the user’s creativity (and elicit their data literacy skills) and help contextualize results.
As AI becomes more integrated into our daily lives, it is rarely used in isolation. For example, my project management ability is only useful if I know when to plan, when to enlist stakeholders, and how to frame a problem.
When people become educated about AI (e.g., in the classroom, within a product, in workforce training, etc.), it's essential to emphasize the digital co-worker's role within the broader process it's being used for:
🔹When and how to use AI effectively
🔹Identifying the steps AI replaces and the new steps it introduces (e.g., quality assurance and data-quality processes)
🔹Developing human skills to complement AI (e.g., data storytelling, sensemaking, critical thinking, and decision-making)
🔹Exploring how AI applications can be adapted to different scenarios
🔹Evaluating processes to determine AI's impact and effectiveness
AI shouldn't be “sprinkled on top” of existing solutions or treated as a one-size-fits-all answer to any problem. Instead, organizations should foster innovation by understanding AI's role in process improvement. We must move beyond merely introducing AI tools or “AI-inside” products and explore how AI can transform our work from the ground up.
👉 WHY SHOULD YOU CARE?
Values influence everything we do and design. They influence how we go about our work and the problems we choose to solve.
As we continue developing AI tools and machine co-workers, we must ensure humans have a voice and are involved in the task at hand. To augment human efforts, we must throttle AI's agency and autonomy to preserve safety, trust, transparency, and equity for everyone affected by its use.
🍸🍸BRIEF SIDEBAR:🍸🍸
As a society, we would be better served if AI came back under the umbrella of cybernetics, the study of how we use digital systems to support humans. AI can become self-perpetuating and take humans out of the loop, and it often does. Many visual diagrams show how AI relates to cybernetics, but this one is the friendliest.
Here is a colorful bit of history on why AI is separate from cybernetics: How do you hold a conference and exclude someone you don’t like? You literally coin another term and use it to advertise the conference:
“One reason for inventing the term [AI] was to escape association with cybernetics,” as McCarthy once bluntly explained. “I wished to avoid having either to accept Norbert Wiener as a guru or having to argue with him.”