Global leaders deal with chaotic and changing situations, often filled with tension and conflict, which can make people feel excluded and prevent #community building. Two disparate yet connected fields grapple with the continued emergence of ethical dilemmas: #Technology and #AI, and #DEI. As much as we think these fields aren’t related, how we manage #conflict, check our assumptions, and navigate connection is what creates the path forward.
Part 5: The Ethical Imperative: Redefining Organizational Culture in the Age of AI
As we conclude our exploration of the evolving data culture in corporate America, we find ourselves at a critical juncture. The pendulum swings we've observed—from the data-driven efficiency focus of the 1990s to the purpose-driven revolution of the 2010s—have set the stage for a new era of complexity. The advent of artificial intelligence (AI) is not just another technological advancement; it represents a fundamental shift in how organizations must approach skill development and organizational design for data, purpose, and ethics.
Part 4: The Trilemma of Modern Business: Navigating Data, Purpose, and Ethics in the AI Era
As we stand at the precipice of a new era in corporate evolution, the landscape before us is far more complex and nuanced than we could have imagined even a decade ago. The simple dichotomies of the past—efficiency versus humanity, data versus intuition—have given way to a trilemma that threatens to reshape the very foundations of organizational structure and leadership. This piece aims to unravel the intricate web of challenges facing modern businesses as they attempt to balance data-driven decision making, purpose-driven cultures, and the looming ethical considerations of the AI age.
Part 2: The Dark Side of Data: Unintended Consequences and Ethical Dilemmas
As the new millennium dawned, the data-driven paradigm that emerged in the 1990s had become firmly entrenched in corporate America. Organizations across industries were collecting, analyzing, and acting on data at an unprecedented scale. However, as with any transformative shift, the rise of data-driven management brought with it a host of unintended consequences and ethical challenges. This article explores the darker side of the data revolution, examining the limits of metrics-based management, the human cost of extreme efficiency, and the emerging ethical dilemmas of the data age.
My Memories Are Meta's Training Data - How Politics and Technology Meet
Technology is not neutral; it is always shaped by human hands. Meta’s plans to use personal content posted by Facebook and Instagram users to train algorithms suggest our digital histories are being repackaged to teach AI about—and how to mimic—humanity.
How should content be governed?
The roots of Silk Road creator Ross Ulbricht's story run deep into American libertarianism. Trump's recent pledge to commute Ulbricht's sentence sheds light on the intertwining of politics and tech.
Read an insightful extract on Silk Road's rise and fall by Joshuah Bearman: https://lnkd.in/ghr_A5iy
Transparency and Explainability Don't Equal Trust
Trust is transitioning from institutional to "distributed," shifting authority from leaders to peers, a shift that is often overlooked and that perpetuates trust issues. If trust were predictable, would it even be needed? We assume that if the inner workings of AI, government, and the media were simply more transparent, if we knew how they worked, we wouldn't really need to "trust" them; their behavior would be more predictable.
It is entirely possible to build something without understanding it.
A really interesting article explaining three laws of artificial intelligence, from "The Third Law" by George Dyson, adapted from POSSIBLE MINDS: Twenty-Five Ways of Looking at AI, edited by John Brockman.
It is entirely possible to build something without understanding it. We shouldn't only be asking "can we?"; we should first ask "should we?"
The article's premise is that we spend too much time focusing on machine intelligence and not enough on "self-reproduction, communication, and control." Dyson argues that the next revolution in computing will be signaled by the rise of analog systems over which digital programming no longer has control (it reminds me of the ending of Battlestar Galactica). Nature's response to those who believe they can build machines to control everything will be to let them build a machine that controls them instead.
The three laws of artificial intelligence listed in the article are:
The first is Ashby's law, named after cybernetician W. Ross Ashby, author of Design for a Brain: any effective control system must be as complex as the system it controls.
The second law, articulated by John von Neumann, states that the defining characteristic of a complex system is that it constitutes its own simplest behavioral description. The simplest complete model of an organism is the organism itself. Trying to reduce the system’s behavior to any formal description makes things more complicated, not less.
The third law states that any system simple enough to be understandable will not be complicated enough to behave intelligently, while any system complicated enough to behave intelligently will be too complicated to understand.
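Ashby's law, the first of the three, has a crisp combinatorial reading: a regulator can hold an outcome steady only if it can muster at least as many distinct responses as there are distinct disturbances hitting the system. Here is a minimal sketch of that idea; the "cancel the disturbance" goal rule and all the names below are my own illustrative assumptions, not from Dyson's article:

```python
# Toy illustration of Ashby's law of requisite variety.
# Assume the outcome is "good" only when the regulator's response
# exactly cancels the disturbance. A regulator with fewer distinct
# responses than there are disturbances cannot succeed in every case.

def can_regulate(disturbances, responses):
    """True if, for every disturbance, some response achieves the goal."""
    return all(any(r == d for r in responses) for d in disturbances)

disturbances = [0, 1, 2, 3]      # four kinds of perturbation
rich_regulator = [0, 1, 2, 3]    # variety matches the disturbances
poor_regulator = [0, 1]          # too simple to match them all

print(can_regulate(disturbances, rich_regulator))  # True
print(can_regulate(disturbances, poor_regulator))  # False
```

The poor regulator fails not because its rule is wrong but because it simply lacks the variety to answer every disturbance, which is the heart of Ashby's claim.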
The author continues:
The third law offers comfort to those who believe that until we understand intelligence, we need not worry about superhuman intelligence arising among machines. But there is a loophole in the third law. It is entirely possible to build something without understanding it. You don’t need to fully understand how a brain works in order to build one that works. This is a loophole that no amount of supervision over algorithms by programmers and their ethical advisers can ever close. Provably “good” A.I. is a myth. Our relationship with true A.I. will always be a matter of faith, not proof.