Trust is shifting from institutional to "distributed": authority is moving from leaders to peers, a change that is often overlooked and that perpetuates existing trust problems. But if an outcome is predictable, do we really need trust at all? If the inner workings of AI, government, and the media were more transparent, if we actually knew how they worked, we would have less need to "trust" them, because their behavior would be more predictable.
What if employees could enter their credentials and skills, share their problem-solving interests, and be dynamically assigned to a team based on the organization's needs? No more silos. The organization could also crowdsource expert citizen engineers with similar interests and skills for inside-outside problem-solving, where appropriate.
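A minimal sketch of what such dynamic assignment could look like, assuming hypothetical employee profiles with skill and interest tags and a set of organizational problems tagged with the skills they need. All names, fields, and data here are illustrative, not an existing system:

```python
from dataclasses import dataclass, field

@dataclass
class Employee:
    name: str
    skills: set[str]                                   # self-reported credentials and skills
    interests: set[str] = field(default_factory=set)   # problem areas they want to work on

@dataclass
class Problem:
    title: str
    needed_skills: set[str]                            # skills the organization needs here

def assign_teams(employees: list[Employee], problems: list[Problem]) -> dict[str, list[str]]:
    """Greedy matching: each employee joins the problem whose needs
    overlap most with their skills and interests."""
    teams: dict[str, list[str]] = {p.title: [] for p in problems}
    for emp in employees:
        best = max(
            problems,
            key=lambda p: len(p.needed_skills & (emp.skills | emp.interests)),
        )
        teams[best.title].append(emp.name)
    return teams

# Illustrative data only
employees = [
    Employee("Ada", {"python", "data-analysis"}, {"sustainability"}),
    Employee("Grace", {"embedded", "testing"}, {"safety"}),
]
problems = [
    Problem("Reduce plant energy use", {"data-analysis", "sustainability"}),
    Problem("Firmware reliability audit", {"embedded", "testing", "safety"}),
]
print(assign_teams(employees, problems))
```

A real system would need richer matching (capacity limits, weighting, external contributors), but the core is simply a profile-to-need matching problem over shared tags.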
The concepts of taxonomy and folksonomy hold significant implications, especially in the context of emerging technologies such as OpenAI's language models. While traditional taxonomies offer structured hierarchies of knowledge, allowing for a systematic approach to organizing information, folksonomies represent a more fluid, emergent way of categorizing information based on user-generated tags and metadata.
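To make the contrast concrete, here is a minimal, hypothetical sketch (the categories and tags are invented for illustration): a taxonomy fixes its hierarchy up front, while a folksonomy emerges from whatever tags users actually attach.

```python
from collections import Counter

# Taxonomy: a fixed hierarchy decided in advance by the organizer.
taxonomy = {
    "Technology": {
        "Artificial Intelligence": ["Language Models", "Computer Vision"],
        "Infrastructure": ["Cloud", "Networking"],
    },
}

# Folksonomy: emergent categories built from user-generated tags.
user_tags = [
    ("post-101", ["ai", "llm", "ethics"]),
    ("post-102", ["llm", "bias", "language"]),
    ("post-103", ["ai", "governance"]),
]

# The "categories" are simply the tags people used, weighted by frequency.
folksonomy = Counter(tag for _, tags in user_tags for tag in tags)

print(folksonomy.most_common(3))   # e.g. [('ai', 2), ('llm', 2), ('ethics', 1)]
```

The point is not the code but the shape of the data: the taxonomy's structure exists before any content does, whereas the folksonomy's structure is whatever the community's tagging behavior converges on.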
However, a challenge arises when technological advances fail to incorporate divergent thinking and instead promote groupthink through convergent taxonomies. This is particularly evident in language models, where developers' linguistic and cultural biases can shape how the dominant language is interpreted and represented.