Author: admin

Shadow AI and Data Poisoning in Large Language Models: Implications for Global Security and Mitigation Strategies

by Igor van Gemert, Expert on Generative AI and CyberResilience Copyright 2024 CyberResilience July 18, 2024 1. Introduction: The Emergence of Shadow AI in the Era of Large Language Models In recent years, the rapid development and widespread adoption of artificial intelligence (AI) have brought about a revolution in technological capabilities. Among the most transformative advancements are Large Language Models (LLMs), which have ushered in a new era of possibilities. However, alongside this progress, a significant security concern has emerged, known as “shadow AI.” This phenomenon involves the unauthorized and uncontrolled use of AI tools and systems within organizations, often without proper oversight from IT or security departments. Imagine a scenario where employees, driven by the ease of access to powerful AI tools like ChatGPT, begin adopting these solutions for their work without going through official channels. ChatGPT, for instance, garnered an astounding 100 million weekly users within a year of its launch, making it simple for individuals to integrate AI into their workflows. This accessibility distinguishes shadow AI from traditional shadow IT, making it more pervasive and challenging to detect. As organizations navigate the complexities of shadow AI, they must also contend with the growing threat of data poisoning attacks. These attacks target the training data of AI models, including LLMs, introducing vulnerabilities, backdoors, or biases that can compromise the security, effectiveness, and ethical behavior of these models. This exploration delves into the intricacies of shadow AI and data poisoning, examining their potential impacts on global security, the challenges in detection and mitigation, and strategies for addressing these emerging threats. 2. Understanding Shadow AI: Definitions and Implications Shadow AI refers to the use of AI tools and technologies within an organization without the knowledge, approval, or oversight of the IT department or relevant authorities. Picture an employee using public AI services like ChatGPT for work-related tasks, deploying AI models or algorithms without proper vetting, integrating AI capabilities into existing systems without authorization, or developing AI applications independently within departments without central coordination.Several factors contribute to the rise of shadow AI. The accessibility of AI tools has lowered the barrier to entry for non-technical users, allowing them to harness the power of AI without requiring specialized knowledge. The rapid advancement of AI technologies often outpaces organizational policies and governance structures, creating a gap between innovation and regulation. Employees, driven by the perceived productivity gains, may turn to AI tools to enhance their efficiency and output. However, many users may not fully understand the risks associated with unauthorized AI use, leading to unintended consequences.The implications of shadow AI for organizations are far-reaching. Security risks loom large, as unauthorized AI use can introduce vulnerabilities and expose sensitive data. Compliance issues arise when shadow AI practices violate regulatory requirements, leading to legal and financial repercussions. Data privacy concerns mount, as AI tools may process and store sensitive information in ways that contravene data protection laws. Uncoordinated AI use can result in inconsistent outputs and decision-making across an organization, while resource inefficiencies stem from duplicate efforts and incompatible systems. 3. The Mechanics of Data Poisoning in Large Language Models Copyright 2024 CyberResilience Data poisoning is a type of attack that targets the training data of AI models, including LLMs. By manipulating the training data, attackers can introduce vulnerabilities, backdoors, or biases that compromise the security, effectiveness, and ethical behavior of the model. Imagine a scenario where an attacker injects mislabeled or malicious data into the training set, causing the model to produce specific outputs when encountering certain triggers. This type of attack is known as label poisoning or backdoor poisoning. Another form of data poisoning involves modifying a significant portion of the training data to influence the model’s learning process. This can entail injecting biased or false information into the training corpus, skewing the model’s outputs. Model inversion attacks, although not strictly poisoning attacks, exploit the model’s responses to infer sensitive information about its training data, which can be used in conjunction with other methods to refine poisoning strategies. Stealth attacks involve strategically manipulating the training data to create hard-to-detect vulnerabilities that can be exploited after deployment, preserving the model’s overall performance while introducing specific weaknesses. The process of poisoning an LLM typically involves several steps. Attackers first gather or generate a set of malicious training samples. For backdoor attacks, a trigger (such as a specific phrase or pattern) is crafted to activate the poisoned behavior. The poisoned samples are then introduced into the training dataset, either during initial training or fine-tuning. The LLM is trained or fine-tuned on the contaminated dataset, incorporating the malicious patterns. Once deployed, the poisoned model can be exploited by inputting the trigger or leveraging the introduced vulnerabilities. Detecting data poisoning in LLMs presents several challenges. The scale of training data for LLMs is massive, making comprehensive inspection impractical. The complexity of these models adds another layer of difficulty, as it is challenging to trace the impact of individual training samples. Advanced poisoning methods can be designed to evade detection by maintaining overall model performance, while the “black box” nature of deep learning models complicates efforts to identify anomalous behaviors. 4. Global Security Implications of Shadow AI and Data Poisoning Copyright 2024 CyberResilience The combination of shadow AI and data poisoning poses significant risks to global security across various domains. Imagine a scenario where poisoned LLMs deployed through shadow AI channels generate vast amounts of coherent, persuasive misinformation. Research by Zellers et al. (2019) demonstrated how GPT-2, a precursor to more advanced models, could generate fake news articles that humans found convincing. Such capabilities could undermine democratic processes through targeted disinformation, erode public trust in institutions and media, and exacerbate social and political divisions. As AI systems become integrated into critical infrastructure, shadow AI and data poisoning could lead to subtle manipulations with potentially catastrophic consequences. A study by Kang et al. (2021) explored the potential impact of AI-driven attacks on power grids, highlighting the need for robust security measures. Disruption of energy distribution systems, compromise of transportation networks, and interference with financial markets and trading systems are among the potential impacts. In the realm of national security and intelligence, the use of compromised LLMs in intelligence analysis could lead to flawed strategic assessments and policy decisions based on manipulated information. A report by the RAND Corporation (2020) emphasized the potential for AI to transform intelligence analysis, underscoring the importance of securing these systems. Misallocation of defense resources based on false intelligence, erosion of diplomatic relations due to AI-generated misunderstandings, and vulnerability of classified information to extraction through poisoned models are critical concerns. Shadow AI practices can inadvertently expose sensitive data to unauthorized AI systems, while data poisoning can create new attack vectors for cybercriminals. Increased risk of data breaches and intellectual property theft, exploitation of AI vulnerabilities for network intrusions, and compromise of personal privacy through model inversion attacks are potential outcomes. The financial sector’s increasing reliance on AI for trading, risk assessment, and fraud detection makes it particularly vulnerable to shadow AI and data poisoning threats. Market manipulation through poisoned trading algorithms, erosion of trust in financial institutions due to AI-driven errors, and potential for large-scale economic disruptions are significant risks. 5. Challenges in Detecting and Mitigating Shadow AI and Data Poisoning Addressing the threats posed by shadow AI and data poisoning presents numerous challenges. The sheer size and complexity of modern LLMs make comprehensive security audits computationally intensive and time-consuming. For instance, GPT-3, one of the largest language models, has 175 billion parameters, making it extremely challenging to analyze thoroughly. This difficulty in identifying all potential vulnerabilities, coupled with high computational costs for security assessments and challenges in real-time monitoring of model behaviors, underscores the scale of the problem. The lack of interpretability in deep neural networks, often referred to as the “black box” problem, makes it challenging to trace decision-making processes and identify anomalous behaviors. This difficulty in distinguishing between legitimate model improvements and malicious alterations, explaining model decisions for regulatory compliance, and identifying the source and extent of data poisoning adds another layer of complexity. The rapid evolution of AI technologies often outpaces the creation of governance frameworks and security measures. This constant need to update security protocols and best practices, coupled with challenges in developing standardized security measures across different AI architectures and maintaining up-to-date expertise among security professionals, highlights the dynamic nature of the threat landscape.The vast amounts of data used to train LLMs make it challenging to vet and validate all sources, increasing the risk of incorporating poisoned data. The impracticality of manual data inspection, difficulty in establishing provenance for all training data, and challenges in maintaining data quality while preserving diversity further complicate the situation.Organizations face the challenge of fostering AI innovation while maintaining robust security measures, often leading to tensions between development teams and security departments. The risk of stifling innovation through overly restrictive security policies, potential for shadow AI adoption as a workaround to security measures, and the need for cultural shifts to integrate security into the AI development process are critical considerations. 6. Mitigation Strategies and Best Practices To address the risks associated with shadow AI and data poisoning, organizations should consider implementing a comprehensive set of mitigation strategies. Establishing clear guidelines for AI deployment and usage within the organization is crucial. This includes creating processes for requesting and approving AI projects, defining roles and responsibilities for AI oversight, establishing ethical guidelines for AI development and use, and implementing regular policy reviews to keep pace with technological advancements. Creating a designated team responsible for overseeing AI projects can help ensure compliance with security and privacy policies. This team should review and approve AI initiatives across the organization, conduct risk assessments for proposed AI deployments, monitor ongoing AI projects for potential security issues, and serve as a central point of expertise for AI-related questions and concerns. Implementing robust data validation techniques is essential to mitigate the risk of data poisoning. This includes conducting statistical analysis to identify anomalies in training data, implementing anomaly detection algorithms to flag suspicious data points, using clustering techniques to identify and isolate potentially malicious samples, and establishing clear data provenance while maintaining detailed records of data sources. Performing ongoing evaluations to identify unauthorized AI deployments and potential vulnerabilities is crucial. This involves conducting network scans to detect unauthorized AI tools and services, performing penetration testing on AI systems to identify vulnerabilities, analyzing model outputs for signs of poisoning or unexpected behaviors, and reviewing access logs and user activities related to AI systems. Educating employees about the risks associated with shadow AI and the importance of following organizational protocols for AI usage is vital. Training programs should cover the potential risks and consequences of unauthorized AI use, proper procedures for requesting and implementing AI solutions, best practices for data handling and privacy protection, and recognition of potential signs of data poisoning or model compromise. Using identity and access management solutions to restrict access to AI tools and platforms based on user roles and responsibilities can help prevent unauthorized use. This includes implementing multi-factor authentication for AI system access, using role-based access control (RBAC) to limit system privileges, monitoring and logging all interactions with AI systems, and implementing data loss prevention (DLP) tools to protect sensitive information. Creating sophisticated tools to analyze internal representations and decision processes of LLMs is crucial for detecting potential compromises. This involves leveraging techniques from explainable AI research to improve model interpretability, developing methods for visualizing and analyzing neural network activations, creating tools for comparing model behaviors across different versions and training runs, and implementing continuous monitoring systems to detect anomalous model outputs. Developing global standards for AI development and deployment, including certification processes for AI systems used in critical applications, is essential for addressing the global nature of AI threats. Participating in international forums and working groups on AI security, collaborating with academic institutions and research organizations, sharing threat intelligence and best practices across borders, and advocating for harmonized regulatory frameworks for AI governance are key steps. 7. Ethical and Legal Considerations The rise of shadow AI and the threat of data poisoning raise complex ethical and legal questions that organizations must address. Determining responsibility for the actions of AI systems with hidden capabilities is challenging, particularly when the line between developer intent and emergent behavior is blurred. Establishing clear lines of responsibility for AI system outputs, developing frameworks for assessing liability in cases of AI-related harm, and considering the role of insurance in mitigating risks associated with AI deployment are critical considerations. Balancing the need for transparency in AI development with concerns about data privacy and intellectual property protection is crucial. Compliance with data protection regulations such as GDPR and CCPA, ethical use of personal data in AI training and deployment, and protecting proprietary algorithms and model architectures while ensuring transparency are key aspects to consider. Evolving ethical guidelines for AI research and development to address the unique challenges posed by potential shadow capabilities is necessary. Developing codes of conduct for AI researchers and developers, implementing ethics review boards for AI projects, and considering the long-term societal impacts of AI technologies are essential steps.8. Future Horizons: Emerging Technologies and Long-term Implications As we look to the future, several emerging technologies and trends will shape the landscape of AI security. Exploring the potential of quantum algorithms for more robust AI security testing and potentially quantum-resistant AI architectures is an active area of research. Quantum-enhanced encryption for AI model protection, quantum algorithms for faster and more comprehensive security audits, and the development of quantum-resistant AI architectures are potential developments. Investigating brain-inspired computing architectures that might offer inherent protections against certain types of attacks or provide new insights into creating more interpretable AI systems is promising. AI systems with improved resilience to adversarial attacks, more efficient and interpretable AI models inspired by biological neural networks, and novel approaches to anomaly detection based on neuromorphic principles are potential developments. Considering how current security challenges might evolve in the context of more advanced AI systems approaching artificial general intelligence is crucial. The work of Bostrom (2014) on superintelligence provides a framework for considering long-term AI safety. Increased complexity in securing systems with human-level or superhuman capabilities, ethical considerations surrounding the rights and responsibilities of AGI systems, and the potential for rapid and unpredictable advancements in AI capabilities are significant implications. Conclusion: Navigating the Perils of Shadow AI and Data Poisoning in a Hyper-Connected WorldThe advent of shadow AI and the insidious threat of data poisoning in Large Language Models (LLMs) represent more than just technical challenges—they signify profound risks to global security, economic stability, and societal trust. In a world increasingly reliant on AI-driven decisions, the unchecked proliferation of shadow AI can undermine the very foundations of organizational integrity and operational security. Meanwhile, the specter of data poisoning looms large, threatening to compromise not just individual models but the ecosystems that depend on their reliability. Consider the ramifications: poisoned LLMs could generate sophisticated misinformation campaigns, destabilize critical infrastructure, and corrupt national security intelligence. These aren’t abstract risks — they are present and escalating dangers that require immediate and concerted action. The impact on democratic processes, public trust, and economic stability could be devastating, with consequences reverberating across the globe. Organizations must recognize that the fight against shadow AI and data poisoning is not just an IT issue — it is a strategic imperative that demands attention at the highest levels of leadership. Implementing robust AI governance policies, investing in advanced detection and mitigation technologies, and fostering a culture of security and compliance are essential steps. The need for centralized oversight, rigorous data validation, and continuous monitoring cannot be overstated. Moreover, the ethical and legal dimensions of AI usage must be addressed head-on. Establishing clear accountability for AI systems, ensuring compliance with data protection regulations, and developing ethical guidelines for AI development are crucial for maintaining public trust and safeguarding privacy. The path forward requires a global effort. International cooperation in developing and enforcing AI security standards, sharing best practices, and collaborating on threat intelligence is vital. The stakes are too high for a fragmented approach; a unified, proactive stance is necessary to mitigate these risks effectively. As we look to the future, emerging technologies such as quantum computing and neuromorphic architectures offer promising avenues for enhancing AI security. However, these advancements must be pursued with a vigilant eye toward potential new vulnerabilities. The journey towards artificial general intelligence (AGI) will only amplify these challenges, making it imperative to embed security and ethical considerations into the very fabric of AI research and development. In conclusion, navigating the perils of shadow AI and data poisoning requires a multifaceted strategy that blends technological innovation with rigorous governance, ethical stewardship, and international collaboration. The time to act is now — before the unseen threats of shadow AI and data poisoning erode the pillars of our interconnected world. By taking decisive steps today, we can safeguard the promise of AI and ensure it remains a force for good in our society. Citations:[1] https://www.wiz.io/academy/shadow-ai [2] https://securiti.ai/blog/ai-governance-for-shadow-ai/ [3] https://arxiv.org/abs/2404.14795 [4] https://blog.barracuda.com/2024/06/25/shadow-AI-what-it-is-its-risks-how-it-can-be-limited Here is a list of the academic sources referenced in the article: About the Author Igor van Gemert is a prominent figure in the field of cybersecurity and disruptive technologies, with over 15 years of experience in IT and OT security domains. As a Singularity University alumnus, he is well-versed in the latest developments in emerging technologies and has a keen interest in their practical applications.Apart from his expertise in cybersecurity, van Gemert is also known for his experience in building start-ups and advising board members on innovation management and cybersecurity resilience. His ability to combine technical knowledge with business acumen has made him a sought-after speaker, writer, and teacher in his field.Overall, van Gemert’s multidisciplinary background and extensive experience in the field of cybersecurity and disruptive technologies make him a valuable asset to the industry, providing insights and guidance on navigating the rapidly evolving technological landscape.

article Europe and Russia Expecting the Future 630x350 1 - Club of Amsterdam

Europe and Russia: Expecting the Future

by Alexander Sokolov, Mikhail SalazkinPublished January 25, 2009 Abstract Year 2008 saw a large-scale population survey of nine European countries, including Russia, aimed at identifying the perceptions of Europeans about their future. The survey covered the most important aspects of human life: work, family, environment, integration, security, consumption, education, relationships between rich and poor. The survey results for each participant country on all topics are compared and interpreted. The analysis revealed a quite complete but contradictory picture of Europeans’ and Russians’ future visions of the Europeans and Russians, their aspirations and preparedness for future events. The responses were dominated by skeptical views. Causes for such an attitude require a more in-depth analysis. Monitoring the evolution of public opinion on the future requires the implementation of similar surveys on a regular basis. Read here Published in Foresight Russia Foresight Russia is a high quality peer reviewed journal indexed in Scopus (already ranked in Q3), GoogleScholar, SSRN, RePEc, EBSCO, etc. The journal supports the dissemination of science and innovation studies, work on Foresight and science and technology policies. It also provides a framework for discussion of S&T and innovation trends and policies. We invite authors of original research papers to examine the limits and potentialities for foresight exercises as an imperative to innovation, and their relevance, interactive effects, and contribution to science, technology and innovation policy. Empirical, qualitative, and quantitative contributions are highly welcome. The jounal is published by the National Research University – HIgher School of Economics 4 times a year in Russian (paper version) and English (electronic version)

Know Your Rhythm – The Future Now Show

with Arnab Bishnu Chowdhury & Felix Bopp “Know Your Rhythm” is a training programme and network that helps participants discover their own sense of Rhythm in life and work, creating ‘conditions’ to experience Aha! Moment, raising well-being, wellness, empathy, teamwork, leadership. Know Your Rhythm is founded by Arnab Bishnu Chowdhury who is based out of Sri Aurobindo Ashram and Auroville, India. Arnab is inspired by Integral Yoga, founded by Sri Aurobindo and Mirra Alfassa (The Mother). Arnab is an Indian composer – trainer – therapist – researcher, 3rd generation from a family of Indian Classical musicians from Senia Maihar Gharana founded by Baba Allaudin Khan, master-teacher to sitar maestro Ravi Shankar. Arnab’s eclectic music inspires itself from the healing properties of ancient Indian classical music, harmony from Western classical, sound design from electronic music and AI. His therapeutic music has been tested by doctors in clinical settings, healed patients and seekers, choreographed in the form of innovative ballet. Moderator Credits Arnab Bishnu ChowdhuryComposer / Educator / Therapist / Explorer of ConsciousnessPuducherry, Indiawww.ninad.in ModeratorFelix B BoppProducer, The Future Now ShowFounder & Publisher, Club of Amsterdamclubofamsterdam.com The Future Now Showclubofamsterdam.com/the-future-now-show You can find The Future Now Show also atLinkedIn: The Future Now Show GroupYouTube: The Future Now Show Channel

Club of Amsterdam Journal, March 2025, Issue 272

Club of Amsterdam Journal, March 2025, Issue 272 HomeJournals ArchiveJournals – Main TopicsClimate Change Success StoriesThe Future Now Shows Club of Amsterdam SearchSubmit your articleContactDonateSubscribe CONTENT Lead ArticleNuclear fusion could one day be a viable clean energy source – but big engineering challenges stand in the wayby George R. Tynan and Farhat Beg, University of California San Diego Article 01Energy Democracy: Building a Green, Resilient Future through Public and Community Ownership by Demos, New York The Future Now Show Energy Diplomacieswith Adriaan Kamp & Patrick Crehan Article 02 Profitable solutions for the planet, with Dr. Bertrand Piccard by Inside Ideas News about the Future> E-Skimo> RebuiLT project shows it’s possible to build differently Article 03Academic Reflection: Analysis of the O1 Chess Environment Exploitation Incident by Igor van Gemert, CEO focusing on cyber security solutions and business continuity Recommended BookNot the End of the World: How We Can Be the First Generation to Build a Sustainable Planetby Hannah Ritchie Article 04People and Planet Health : The Role of Healthcare Professionals in Climate Actionby BupaSolutions for the PlanetHealthcareHealthcare Corporate ExamplesHealthcare Holistic, Naturebased Examples Futurist PortraitHenrik von ScheelAuthored the 4th Industrial Revolution Tags4th Industrial Revolution, ARCHITECTURE, Artificial Intelligence, Chess, Club of Rome, DEMOCRACY, Diplomacy, ECONOMY, ENERGY, Healthcare, Heritage Foundation, Nuclear Fusion, Oil Industry, Ski, Sustainability, UN Welcome   Felix B BoppProducer, The Future Now ShowFounder & Publisher, Club of Amsterdam Website statistics forclubofamsterdam.comFebruary 2025: 2025   visits 162,200 visitors 50,800 2024   visits 553,500 visitors 181,000 For Insiders Our website is enjoying a fantastic group of visitors and we get a lot of friendly comments that support us. THANK YOU we appreciate it! We would like to maintain the quality of our service, but need your support.Please support us and donate: And:Please have a look at the guidelines for posting comments. Quotes Hannah Ritchie: “In 2010 I started my degree in Environmental Geoscience at the University of Edinburgh. I showed up as a fresh-faced 16-year-old, ready to learn how we were going to fix some of the world’s biggest challenges. Four years later, I left with no solutions. Instead, I felt the deadweight of endless unsolvable problems. Each day at Edinburgh was a constant reminder of how humanity was ravaging the planet.” Henrik von Scheel: “The more resistant we are to change, the harder it will be for us to adapt. The people that don’t survive are the people resistant to change.” Adriaan Kamp: “We want to deepen our understanding, inspire more open, creative and richer conversations, advance our learning, and more smartly organize and enable a true reform- of what today is becoming so clearly broken.”

Energy Diplomacies – The Future Now Show

with Adriaan Kamp & Patrick Crehan “The meeting covers Adriaan Kamp’s journey from the oil industry to advocating for sustainable energy solutions, highlighting the need for a transformation in the current economic system towards a more collaborative and sustainable approach. Discussions touched on the impact of ideologies and political changes on global sustainability efforts, emphasizing the importance of diplomacy, mutual respect, and cooperation in international relations. The conversation concluded with a call for institutional reform, particularly at the UN level, to better address current global challenges and foster harmonious dialogues between nations.” – AI summary by Zoom Moderator Credits Adriaan KampFounder of Energy For One WorldThe Hague, Netherlandswww.energyforoneworld.com ModeratorPatrick CrehanFounder and Director at Crehan, Kusano & Associateswww.cka.beFormer Director of the Club of Amsterdamclubofamsterdam.com Felix B BoppProducer, The Future Now ShowFounder & Publisher, Club of Amsterdamclubofamsterdam.com The Future Now Showclubofamsterdam.com/the-future-now-show You can find The Future Now Show also atLinkedIn: The Future Now Show GroupYouTube: The Future Now Show Channel

Club of Amsterdam Journal, Febuary 2025, Issue 271

Club of Amsterdam Journal, February 2025, Issue 271 HomeJournals ArchiveJournals – Main TopicsClimate Change Success StoriesThe Future Now Shows Club of Amsterdam SearchSubmit your articleContactDonateSubscribe CONTENT Lead ArticleNew set of human rights principles aims to end displacement and abuse of Indigenous people through ‘fortress conservation’by John H. Knox, Professor of International Law, Wake Forest University Article 01Livestock animals use very different amounts of antibiotics by Our World in Data, Author Hannah Ritchie The Future Now Show New Forms of Governancewith Rob van Kranenburg & Reto Brosi Article 02 Hans Labohm: Chronicles of Climate Hysteria by Tom Nelson Pod News about the Future> Bittensor> Carbon OrchardArticle 03Reclaiming Control: An Analysis of Europe’s Path to Digital Sovereignty by Igor van Gemert, CEO focusing on cyber security solutions and business continuity Recommended BookEmbedding Enterprise Risk Management and Building Resilience:the Practical Guideby Reto Brosi & Ching Guei Tan Article 04The Internet of Medical Things: Revolutionizing patient-centric careby GE HealthCare Climate Change Success StoryIoT (Internet of Things) Examples Futurist PortraitScott SteinbergThe Master of Innovation TagsAgriculture, ARCHITECTURE, Data, Digital Governance, ENERGY, FOOD, Healthcare, Indigenous, Innovation,Internet of Things, IoMT, IoT, MOBILITY, The Internet of Medical Things, Water, Wildlife Welcome Felix B BoppProducer, The Future Now ShowFounder & Publisher, Club of Amsterdam Website statistics forclubofamsterdam.comJanuary 2025: 2025   visits 49,600 visitors 13,400 2024   visits 553,500 visitors 181,000 For Insiders Our website is enjoying a fantastic group of visitors and we get a lot of friendly comments that support us. THANK YOU we appreciate it!Close to 1,000 comments reach us daily and the number is growing. We are handling this workload on a voluntary basis and are now beyond our capacity. We would like to maintain the quality of our service, but need your support.Please support us and donate: And:Please have a look at the guidelines for posting comments. Quotes Rob van Kranenburg: “One of the most vital arguments in my upcoming #Statecraft and #Policymaking in the Age of Digital Twins, Digital Democracy and the Internet of Things is building a European phone.” Scott Steinberg: “Unpredictability is the only thing we can predict and uncertainty the only certainty for businesses and working professionals going forward.” Hans Labohm: “I was already afraid that the climate hype would give rise to more regulation and a tendency of our economic model towards central planning. And we knew, from the experiences and also from the theory, of the 70s that the central plan produces model and less prosperity and less freedom for the people. So as an economist, I was afraid of such a development.”

New Forms of Governance – The Future Now Show

With Rob van Kranenburg & Reto Brosi “Rob van Kranenburg & Reto Brosi talk about the evolution of data gathering and management systems, the challenges of technological advancement and governance, and the potential of creating a European phone to regain control over personal data and services. They also explore the concept of disposable or pseudonymous identities, the importance of engineers in governance discussions, and the need for a working group on cybernetics to explore new governance models. Lastly, they touche on the concerns of intelligence agencies and the role of influential individuals in the digital world.” AI summary by Zoom Moderator Statecraft and Policymaking in the Age of Digital TwinsDigital Democracy and the Internet of Thingsby Rob van Kranenburg Credits Rob van KranenburgSenior Policy and Communication Expert at Martel InnovateGhent, Belgiummartel-innovate.com IoT Counciltheinternetofthings.eu Disposable Identities – blogdisposableidentities.eu Moderator Reto BrosiEfficient and effective ERMBasel, Switzerland Megrow Consulting GmbHManaging Directorwww.megrow.ch Felix B BoppProducer, The Future Now ShowFounder & Publisher, Club of Amsterdamclubofamsterdam.com The Future Now Showclubofamsterdam.com/thefuturenowshow

Club of Amsterdam Journal, December 2024 / January 2025, Issue 270

Club of Amsterdam Journal, December 2024 / January 2025, Issue 270 HomeJournals ArchiveJournals – Main TopicsClimate Change Success StoriesThe Future Now Shows Club of Amsterdam SearchSubmit your articleContactDonateSubscribe CONTENT SPECIAL EDITION ABOUT PREFERRED FUTURES Opening What Is a Preferred Future?by Andy Hines Article 01Futures for the Heart by Wendy Schultz, Director, Infinite Futures The Future Now Show Cryptocurrencies – Quo vadis?with Peter Maissen, Rohit Talwar, Chris Skinner, Hardy Schloer & Mario de Vries Article 02 Preferred Future with Glen Hiemstra News about the Future> Boat Lift, Scotland> Green Software Foundation Article 03Cybersecurity Labor Shortage in Europe: Challenges and Solutions by Igor van Gemert, Expert on Generative AI and CyberResilience Recommended BookThe Circular Economy: A User’s Guideby Walter R. Stahel Article 04From Web 1.0 to Web 3.0: The Evolution of Digital Identityby Jim Hartsema and Peter van GorselClimate Change Success StoryCircular EconomyReduce / Reuse / Recycle / Recover Futurist PortraitMarkku WileniusUNESCO Chair in Learning Society and Futures of Education TagsAML, Blockchain, Circular Economy, Community Visioning, Crypto Token, Cryptocurrencies, Developmental Visioning, Digital Identity, FOOD, Green Software, KYC, Meditative Visioning, Mental Imagery, Nature, Neuroscience, Plastic, Preferred Futures, Strategic Planning, Visioning Methods WelcomeFelix B BoppProducer, The Future Now ShowFounder & Publisher, Club of Amsterdam Website statistics forclubofamsterdam.com November 2024:2024visits 453,000visitors 156,000 For InsidersOur website is enjoying a fantastic group of visitors and we get a lot of friendly comments that support us. THANK YOU we appreciate it!Close to 1,000 comments daily reach us and the number is growing. We are handling this workload on a voluntary basis and are now beyond our capacity. We would like to maintain the quality of our service, but need your support.Please support us and donateAnd:Please have a look at the guidelines for posting comments. Quotes Any Hines: “Many times, organizations are unclear about what decisions they need to make or what they need to learn. Investing time upfront in clarifying a focal issue will pay dividends in keeping the activity focused and relevant.” Wendy Schultz: “Our futures emerge from the collision of changes, impacts – and dreams. Identify the changes, explore the impacts, and hear the different dreams – then collaborate on transformational paths forward.” Rohit Talwar: “Listen to the other person first until you’ve really heard their perspective, however much of a rush you are in to get your point across. The more people can see that you listen and care what they have to say, the more you’ll be heard, and the faster things will get done.”  “Preferred futures” refer to the ideal or most desirable visions of the future that individuals, organizations, or societies aim to achieve. These are often crafted by thinking about what a better, more prosperous, and equitable future would look like, then backcasting – planning backwards from that future to identify the steps needed to get there. Here are a few ways “preferred futures” are used: 1. Strategic Planning: Organizations use preferred futures to set long-term goals and create a clear direction. This helps guide innovation and prioritize initiatives that align with their vision. 2. Community Visioning: Communities may explore their preferred futures to identify shared values and create cohesive goals, like improving local infrastructure, education, or environmental stewardship. 3. Global Challenges: On a larger scale, preferred futures are relevant in discussions on climate change, economic equity, and technological advancement, aiming for sustainable solutions to global issues. 4. Personal Development: Individuals can also apply the concept to set personal life goals by envisioning a future aligned with their values and taking actionable steps to move toward it. Exploring preferred futures can be transformative, giving clarity on desired outcomes and creating actionable steps to bridge the gap between present conditions and aspirational goals.  = ChatGPT