with Mike Jackson & Reto Brosi In this meeting, Dr. Mike Jackson discusses the evolution of his AI-driven foresight company PreEmpt, and its capabilities in strategic planning and decision-making. He explaines how the system analyses data, generates insights, and provides recommendations for various challenges, emphasizing its potential to revolutionize strategic thinking and organizational planning. The discussion also touches on the system’s security measures, cultural adaptability, and future developments, highlighting its aim to enhance decision-making processes across different sectors.
Club of Amsterdam Journal, April 2025, Issue 273 HomeJournals ArchiveJournals – Main TopicsClimate Change Success StoriesSolutions for the PlanetThe Future Now Shows Club of Amsterdam SearchSubmit your articleContactDonateSubscribe CONTENT Lead ArticleHow music heals us, even when it’s sad – by a neuroscientist leading a new study of musical therapyby Leigh Riby, Professor of Cognitive-Neuroscience, Northumbria University, Newcastle Article 01Accelerate the green transition to defend the EU auto industryby Arvea Marieni The Future Now Show Know Your Rhythmwith Arnab Bishnu Chowdhury & Felix Bopp Article 02 Electric vehicles in Africa: what`s needed to grow the sectorby MJ (Thinus) Booysen, Stellenbosch University and Joubert Van Eeden, Stellenbosch University News about the Future> Fact-checking of climate claims> Rebuilding Ukraine Article 03The Future of AI: Insights from Geoffrey Hinton by Igor van Gemert, CEO focusing on cyber security solutions and business continuity Recommended BookAn Introduction to Indian Music Therapyby Karuna Nagarajan Article 04Toyota in Panic! Traoré Floods Africa with Burkina Faso’s EVs!by Black Culture DiarySolutions for the PlanetArchitecture Sustainable Architecture Movements & ConceptsNotable Sustainable Buildings & CitiesEmerging Technologies & MaterialsThe Future of Sustainable Architecture Futurist PortraitAnand MunshiGlobal futurist TagsMusic, Healthcare, Spirituality, Electric cars, Burkina Faso, Africa, Music Therapy, Auroville,India, Artificial Intelligence, Mobility, EU, China, Car Industry, Ukraine, Climate Change, And:Please have a look at the guidelines for posting comments. Quotes Anand Munshi:“Diversity and Inclusion (D&I) have moved beyond buzzwords to become critical components of organizational success. According to McKinsey & Company’s 2023 report, companies in the top quartile for gender diversity on executive teams were 25% more likely to experience above-average profitability. Bain & Company also highlights that ethnically diverse companies are 36% more likely to outperform their peers financially. However, challenges persist: gender pay gaps, underrepresentation in leadership, and unconscious biases continue to hinder progress.” Arnab Bishnu Chowdhury: “Know Your Rhythm (KYR) has engaged over 20,000 ‘care givers’ of the global society including medical practitioners with their patients, therapists, teachers & students, special educators, youth, managers, professors, administrators at conferences and workshops in India, Austria, France, Belgium, Russia, Thailand, USA.” Leigh Riby:“There is a renewed fascination with the healing powers of music. This resurgence can primarily be attributed to recent breakthroughs in neuroscientific research, which have substantiated music’s therapeutic properties such as emotional regulation and brain re-engagement. This has led to a growing integration of music therapy with conventional mental health treatments.” ChatGPT=
Evolutionary adaptibility by John Stewart The Ukrainian Dilemma and the Bigger Picture by Hardy F. Schloer What more demand for meat means for the future by Christophe Pelletier ICT4D: Top trends for 2014 by Madanmohan Rao Ecological Architecture – some thoughts by Patrick Crehan Ecosophy: Nature’s Guide to a Better World by Elisabet Sahtouris Chronicle from the Future by Rosana Agudo Natural Capital & new innovation strategies by Tom Bosschaert Shadow AI and Data Poisoning in Large Language Models: Implications for Global Security and Mitigation Strategies by Igor van Gemert Socratic Design by Humberto Schwab The impact of culture on education by Huib Wursten and Carel Jacobs Inner peace and generosity by Elisabet Sahtouris TPP — copyright versus free speech by Annie Machon 10-step program for a sick planet by Hardy F. Schloer Homo Economicus by Peter van Gorsel Truth Decay and Truth Engines by Peter Cochrane Europe and Russia: Expecting the Future by Alexander Sokolov, Mikhail Salazkin China’s Future, Your Strategy by Rohit Talwar
by Igor van Gemert, Expert on Generative AI and CyberResilience Copyright 2024 CyberResilience July 18, 2024 1. Introduction: The Emergence of Shadow AI in the Era of Large Language Models In recent years, the rapid development and widespread adoption of artificial intelligence (AI) have brought about a revolution in technological capabilities. Among the most transformative advancements are Large Language Models (LLMs), which have ushered in a new era of possibilities. However, alongside this progress, a significant security concern has emerged, known as “shadow AI.” This phenomenon involves the unauthorized and uncontrolled use of AI tools and systems within organizations, often without proper oversight from IT or security departments. Imagine a scenario where employees, driven by the ease of access to powerful AI tools like ChatGPT, begin adopting these solutions for their work without going through official channels. ChatGPT, for instance, garnered an astounding 100 million weekly users within a year of its launch, making it simple for individuals to integrate AI into their workflows. This accessibility distinguishes shadow AI from traditional shadow IT, making it more pervasive and challenging to detect. As organizations navigate the complexities of shadow AI, they must also contend with the growing threat of data poisoning attacks. These attacks target the training data of AI models, including LLMs, introducing vulnerabilities, backdoors, or biases that can compromise the security, effectiveness, and ethical behavior of these models. This exploration delves into the intricacies of shadow AI and data poisoning, examining their potential impacts on global security, the challenges in detection and mitigation, and strategies for addressing these emerging threats. 2. Understanding Shadow AI: Definitions and Implications Shadow AI refers to the use of AI tools and technologies within an organization without the knowledge, approval, or oversight of the IT department or relevant authorities. Picture an employee using public AI services like ChatGPT for work-related tasks, deploying AI models or algorithms without proper vetting, integrating AI capabilities into existing systems without authorization, or developing AI applications independently within departments without central coordination.Several factors contribute to the rise of shadow AI. The accessibility of AI tools has lowered the barrier to entry for non-technical users, allowing them to harness the power of AI without requiring specialized knowledge. The rapid advancement of AI technologies often outpaces organizational policies and governance structures, creating a gap between innovation and regulation. Employees, driven by the perceived productivity gains, may turn to AI tools to enhance their efficiency and output. However, many users may not fully understand the risks associated with unauthorized AI use, leading to unintended consequences.The implications of shadow AI for organizations are far-reaching. Security risks loom large, as unauthorized AI use can introduce vulnerabilities and expose sensitive data. Compliance issues arise when shadow AI practices violate regulatory requirements, leading to legal and financial repercussions. Data privacy concerns mount, as AI tools may process and store sensitive information in ways that contravene data protection laws. Uncoordinated AI use can result in inconsistent outputs and decision-making across an organization, while resource inefficiencies stem from duplicate efforts and incompatible systems. 3. The Mechanics of Data Poisoning in Large Language Models Copyright 2024 CyberResilience Data poisoning is a type of attack that targets the training data of AI models, including LLMs. By manipulating the training data, attackers can introduce vulnerabilities, backdoors, or biases that compromise the security, effectiveness, and ethical behavior of the model. Imagine a scenario where an attacker injects mislabeled or malicious data into the training set, causing the model to produce specific outputs when encountering certain triggers. This type of attack is known as label poisoning or backdoor poisoning. Another form of data poisoning involves modifying a significant portion of the training data to influence the model’s learning process. This can entail injecting biased or false information into the training corpus, skewing the model’s outputs. Model inversion attacks, although not strictly poisoning attacks, exploit the model’s responses to infer sensitive information about its training data, which can be used in conjunction with other methods to refine poisoning strategies. Stealth attacks involve strategically manipulating the training data to create hard-to-detect vulnerabilities that can be exploited after deployment, preserving the model’s overall performance while introducing specific weaknesses. The process of poisoning an LLM typically involves several steps. Attackers first gather or generate a set of malicious training samples. For backdoor attacks, a trigger (such as a specific phrase or pattern) is crafted to activate the poisoned behavior. The poisoned samples are then introduced into the training dataset, either during initial training or fine-tuning. The LLM is trained or fine-tuned on the contaminated dataset, incorporating the malicious patterns. Once deployed, the poisoned model can be exploited by inputting the trigger or leveraging the introduced vulnerabilities. Detecting data poisoning in LLMs presents several challenges. The scale of training data for LLMs is massive, making comprehensive inspection impractical. The complexity of these models adds another layer of difficulty, as it is challenging to trace the impact of individual training samples. Advanced poisoning methods can be designed to evade detection by maintaining overall model performance, while the “black box” nature of deep learning models complicates efforts to identify anomalous behaviors. 4. Global Security Implications of Shadow AI and Data Poisoning Copyright 2024 CyberResilience The combination of shadow AI and data poisoning poses significant risks to global security across various domains. Imagine a scenario where poisoned LLMs deployed through shadow AI channels generate vast amounts of coherent, persuasive misinformation. Research by Zellers et al. (2019) demonstrated how GPT-2, a precursor to more advanced models, could generate fake news articles that humans found convincing. Such capabilities could undermine democratic processes through targeted disinformation, erode public trust in institutions and media, and exacerbate social and political divisions. As AI systems become integrated into critical infrastructure, shadow AI and data poisoning could lead to subtle manipulations with potentially catastrophic consequences. A study by Kang et al. (2021) explored the potential impact of AI-driven attacks on power grids, highlighting the need for robust security measures. Disruption of energy distribution systems, compromise of transportation networks, and interference with financial markets and trading systems are among the potential impacts. In the realm of national security and intelligence, the use of compromised LLMs in intelligence analysis could lead to flawed strategic assessments and policy decisions based on manipulated information. A report by the RAND Corporation (2020) emphasized the potential for AI to transform intelligence analysis, underscoring the importance of securing these systems. Misallocation of defense resources based on false intelligence, erosion of diplomatic relations due to AI-generated misunderstandings, and vulnerability of classified information to extraction through poisoned models are critical concerns. Shadow AI practices can inadvertently expose sensitive data to unauthorized AI systems, while data poisoning can create new attack vectors for cybercriminals. Increased risk of data breaches and intellectual property theft, exploitation of AI vulnerabilities for network intrusions, and compromise of personal privacy through model inversion attacks are potential outcomes. The financial sector’s increasing reliance on AI for trading, risk assessment, and fraud detection makes it particularly vulnerable to shadow AI and data poisoning threats. Market manipulation through poisoned trading algorithms, erosion of trust in financial institutions due to AI-driven errors, and potential for large-scale economic disruptions are significant risks. 5. Challenges in Detecting and Mitigating Shadow AI and Data Poisoning Addressing the threats posed by shadow AI and data poisoning presents numerous challenges. The sheer size and complexity of modern LLMs make comprehensive security audits computationally intensive and time-consuming. For instance, GPT-3, one of the largest language models, has 175 billion parameters, making it extremely challenging to analyze thoroughly. This difficulty in identifying all potential vulnerabilities, coupled with high computational costs for security assessments and challenges in real-time monitoring of model behaviors, underscores the scale of the problem. The lack of interpretability in deep neural networks, often referred to as the “black box” problem, makes it challenging to trace decision-making processes and identify anomalous behaviors. This difficulty in distinguishing between legitimate model improvements and malicious alterations, explaining model decisions for regulatory compliance, and identifying the source and extent of data poisoning adds another layer of complexity. The rapid evolution of AI technologies often outpaces the creation of governance frameworks and security measures. This constant need to update security protocols and best practices, coupled with challenges in developing standardized security measures across different AI architectures and maintaining up-to-date expertise among security professionals, highlights the dynamic nature of the threat landscape.The vast amounts of data used to train LLMs make it challenging to vet and validate all sources, increasing the risk of incorporating poisoned data. The impracticality of manual data inspection, difficulty in establishing provenance for all training data, and challenges in maintaining data quality while preserving diversity further complicate the situation.Organizations face the challenge of fostering AI innovation while maintaining robust security measures, often leading to tensions between development teams and security departments. The risk of stifling innovation through overly restrictive security policies, potential for shadow AI adoption as a workaround to security measures, and the need for cultural shifts to integrate security into the AI development process are critical considerations. 6. Mitigation Strategies and Best Practices To address the risks associated with shadow AI and data poisoning, organizations should consider implementing a comprehensive set of mitigation strategies. Establishing clear guidelines for AI deployment and usage within the organization is crucial. This includes creating processes for requesting and approving AI projects, defining roles and responsibilities for AI oversight, establishing ethical guidelines for AI development and use, and implementing regular policy reviews to keep pace with technological advancements. Creating a designated team responsible for overseeing AI projects can help ensure compliance with security and privacy policies. This team should review and approve AI initiatives across the organization, conduct risk assessments for proposed AI deployments, monitor ongoing AI projects for potential security issues, and serve as a central point of expertise for AI-related questions and concerns. Implementing robust data validation techniques is essential to mitigate the risk of data poisoning. This includes conducting statistical analysis to identify anomalies in training data, implementing anomaly detection algorithms to flag suspicious data points, using clustering techniques to identify and isolate potentially malicious samples, and establishing clear data provenance while maintaining detailed records of data sources. Performing ongoing evaluations to identify unauthorized AI deployments and potential vulnerabilities is crucial. This involves conducting network scans to detect unauthorized AI tools and services, performing penetration testing on AI systems to identify vulnerabilities, analyzing model outputs for signs of poisoning or unexpected behaviors, and reviewing access logs and user activities related to AI systems. Educating employees about the risks associated with shadow AI and the importance of following organizational protocols for AI usage is vital. Training programs should cover the potential risks and consequences of unauthorized AI use, proper procedures for requesting and implementing AI solutions, best practices for data handling and privacy protection, and recognition of potential signs of data poisoning or model compromise. Using identity and access management solutions to restrict access to AI tools and platforms based on user roles and responsibilities can help prevent unauthorized use. This includes implementing multi-factor authentication for AI system access, using role-based access control (RBAC) to limit system privileges, monitoring and logging all interactions with AI systems, and implementing data loss prevention (DLP) tools to protect sensitive information. Creating sophisticated tools to analyze internal representations and decision processes of LLMs is crucial for detecting potential compromises. This involves leveraging techniques from explainable AI research to improve model interpretability, developing methods for visualizing and analyzing neural network activations, creating tools for comparing model behaviors across different versions and training runs, and implementing continuous monitoring systems to detect anomalous model outputs. Developing global standards for AI development and deployment, including certification processes for AI systems used in critical applications, is essential for addressing the global nature of AI threats. Participating in international forums and working groups on AI security, collaborating with academic institutions and research organizations, sharing threat intelligence and best practices across borders, and advocating for harmonized regulatory frameworks for AI governance are key steps. 7. Ethical and Legal Considerations The rise of shadow AI and the threat of data poisoning raise complex ethical and legal questions that organizations must address. Determining responsibility for the actions of AI systems with hidden capabilities is challenging, particularly when the line between developer intent and emergent behavior is blurred. Establishing clear lines of responsibility for AI system outputs, developing frameworks for assessing liability in cases of AI-related harm, and considering the role of insurance in mitigating risks associated with AI deployment are critical considerations. Balancing the need for transparency in AI development with concerns about data privacy and intellectual property protection is crucial. Compliance with data protection regulations such as GDPR and CCPA, ethical use of personal data in AI training and deployment, and protecting proprietary algorithms and model architectures while ensuring transparency are key aspects to consider. Evolving ethical guidelines for AI research and development to address the unique challenges posed by potential shadow capabilities is necessary. Developing codes of conduct for AI researchers and developers, implementing ethics review boards for AI projects, and considering the long-term societal impacts of AI technologies are essential steps.8. Future Horizons: Emerging Technologies and Long-term Implications As we look to the future, several emerging technologies and trends will shape the landscape of AI security. Exploring the potential of quantum algorithms for more robust AI security testing and potentially quantum-resistant AI architectures is an active area of research. Quantum-enhanced encryption for AI model protection, quantum algorithms for faster and more comprehensive security audits, and the development of quantum-resistant AI architectures are potential developments. Investigating brain-inspired computing architectures that might offer inherent protections against certain types of attacks or provide new insights into creating more interpretable AI systems is promising. AI systems with improved resilience to adversarial attacks, more efficient and interpretable AI models inspired by biological neural networks, and novel approaches to anomaly detection based on neuromorphic principles are potential developments. Considering how current security challenges might evolve in the context of more advanced AI systems approaching artificial general intelligence is crucial. The work of Bostrom (2014) on superintelligence provides a framework for considering long-term AI safety. Increased complexity in securing systems with human-level or superhuman capabilities, ethical considerations surrounding the rights and responsibilities of AGI systems, and the potential for rapid and unpredictable advancements in AI capabilities are significant implications. Conclusion: Navigating the Perils of Shadow AI and Data Poisoning in a Hyper-Connected WorldThe advent of shadow AI and the insidious threat of data poisoning in Large Language Models (LLMs) represent more than just technical challenges—they signify profound risks to global security, economic stability, and societal trust. In a world increasingly reliant on AI-driven decisions, the unchecked proliferation of shadow AI can undermine the very foundations of organizational integrity and operational security. Meanwhile, the specter of data poisoning looms large, threatening to compromise not just individual models but the ecosystems that depend on their reliability. Consider the ramifications: poisoned LLMs could generate sophisticated misinformation campaigns, destabilize critical infrastructure, and corrupt national security intelligence. These aren’t abstract risks — they are present and escalating dangers that require immediate and concerted action. The impact on democratic processes, public trust, and economic stability could be devastating, with consequences reverberating across the globe. Organizations must recognize that the fight against shadow AI and data poisoning is not just an IT issue — it is a strategic imperative that demands attention at the highest levels of leadership. Implementing robust AI governance policies, investing in advanced detection and mitigation technologies, and fostering a culture of security and compliance are essential steps. The need for centralized oversight, rigorous data validation, and continuous monitoring cannot be overstated. Moreover, the ethical and legal dimensions of AI usage must be addressed head-on. Establishing clear accountability for AI systems, ensuring compliance with data protection regulations, and developing ethical guidelines for AI development are crucial for maintaining public trust and safeguarding privacy. The path forward requires a global effort. International cooperation in developing and enforcing AI security standards, sharing best practices, and collaborating on threat intelligence is vital. The stakes are too high for a fragmented approach; a unified, proactive stance is necessary to mitigate these risks effectively. As we look to the future, emerging technologies such as quantum computing and neuromorphic architectures offer promising avenues for enhancing AI security. However, these advancements must be pursued with a vigilant eye toward potential new vulnerabilities. The journey towards artificial general intelligence (AGI) will only amplify these challenges, making it imperative to embed security and ethical considerations into the very fabric of AI research and development. In conclusion, navigating the perils of shadow AI and data poisoning requires a multifaceted strategy that blends technological innovation with rigorous governance, ethical stewardship, and international collaboration. The time to act is now — before the unseen threats of shadow AI and data poisoning erode the pillars of our interconnected world. By taking decisive steps today, we can safeguard the promise of AI and ensure it remains a force for good in our society. Citations:[1] https://www.wiz.io/academy/shadow-ai [2] https://securiti.ai/blog/ai-governance-for-shadow-ai/ [3] https://arxiv.org/abs/2404.14795 [4] https://blog.barracuda.com/2024/06/25/shadow-AI-what-it-is-its-risks-how-it-can-be-limited Here is a list of the academic sources referenced in the article: About the Author Igor van Gemert is a prominent figure in the field of cybersecurity and disruptive technologies, with over 15 years of experience in IT and OT security domains. As a Singularity University alumnus, he is well-versed in the latest developments in emerging technologies and has a keen interest in their practical applications.Apart from his expertise in cybersecurity, van Gemert is also known for his experience in building start-ups and advising board members on innovation management and cybersecurity resilience. His ability to combine technical knowledge with business acumen has made him a sought-after speaker, writer, and teacher in his field.Overall, van Gemert’s multidisciplinary background and extensive experience in the field of cybersecurity and disruptive technologies make him a valuable asset to the industry, providing insights and guidance on navigating the rapidly evolving technological landscape.
by Alexander Sokolov, Mikhail SalazkinPublished January 25, 2009 Abstract Year 2008 saw a large-scale population survey of nine European countries, including Russia, aimed at identifying the perceptions of Europeans about their future. The survey covered the most important aspects of human life: work, family, environment, integration, security, consumption, education, relationships between rich and poor. The survey results for each participant country on all topics are compared and interpreted. The analysis revealed a quite complete but contradictory picture of Europeans’ and Russians’ future visions of the Europeans and Russians, their aspirations and preparedness for future events. The responses were dominated by skeptical views. Causes for such an attitude require a more in-depth analysis. Monitoring the evolution of public opinion on the future requires the implementation of similar surveys on a regular basis. Read here Published in Foresight Russia Foresight Russia is a high quality peer reviewed journal indexed in Scopus (already ranked in Q3), GoogleScholar, SSRN, RePEc, EBSCO, etc. The journal supports the dissemination of science and innovation studies, work on Foresight and science and technology policies. It also provides a framework for discussion of S&T and innovation trends and policies. We invite authors of original research papers to examine the limits and potentialities for foresight exercises as an imperative to innovation, and their relevance, interactive effects, and contribution to science, technology and innovation policy. Empirical, qualitative, and quantitative contributions are highly welcome. The jounal is published by the National Research University – HIgher School of Economics 4 times a year in Russian (paper version) and English (electronic version)
with Arnab Bishnu Chowdhury & Felix Bopp “Know Your Rhythm” is a training programme and network that helps participants discover their own sense of Rhythm in life and work, creating ‘conditions’ to experience Aha! Moment, raising well-being, wellness, empathy, teamwork, leadership. Know Your Rhythm is founded by Arnab Bishnu Chowdhury who is based out of Sri Aurobindo Ashram and Auroville, India. Arnab is inspired by Integral Yoga, founded by Sri Aurobindo and Mirra Alfassa (The Mother). Arnab is an Indian composer – trainer – therapist – researcher, 3rd generation from a family of Indian Classical musicians from Senia Maihar Gharana founded by Baba Allaudin Khan, master-teacher to sitar maestro Ravi Shankar. Arnab’s eclectic music inspires itself from the healing properties of ancient Indian classical music, harmony from Western classical, sound design from electronic music and AI. His therapeutic music has been tested by doctors in clinical settings, healed patients and seekers, choreographed in the form of innovative ballet. Moderator Credits Arnab Bishnu ChowdhuryComposer / Educator / Therapist / Explorer of ConsciousnessPuducherry, Indiawww.ninad.in ModeratorFelix B BoppProducer, The Future Now ShowFounder & Publisher, Club of Amsterdamclubofamsterdam.com The Future Now Showclubofamsterdam.com/the-future-now-show You can find The Future Now Show also atLinkedIn: The Future Now Show GroupYouTube: The Future Now Show Channel
Club of Amsterdam Journal, March 2025, Issue 272 HomeJournals ArchiveJournals – Main TopicsClimate Change Success StoriesThe Future Now Shows Club of Amsterdam SearchSubmit your articleContactDonateSubscribe CONTENT Lead ArticleNuclear fusion could one day be a viable clean energy source – but big engineering challenges stand in the wayby George R. Tynan and Farhat Beg, University of California San Diego Article 01Energy Democracy: Building a Green, Resilient Future through Public and Community Ownership by Demos, New York The Future Now Show Energy Diplomacieswith Adriaan Kamp & Patrick Crehan Article 02 Profitable solutions for the planet, with Dr. Bertrand Piccard by Inside Ideas News about the Future> E-Skimo> RebuiLT project shows it’s possible to build differently Article 03Academic Reflection: Analysis of the O1 Chess Environment Exploitation Incident by Igor van Gemert, CEO focusing on cyber security solutions and business continuity Recommended BookNot the End of the World: How We Can Be the First Generation to Build a Sustainable Planetby Hannah Ritchie Article 04People and Planet Health : The Role of Healthcare Professionals in Climate Actionby BupaSolutions for the PlanetHealthcareHealthcare Corporate ExamplesHealthcare Holistic, Naturebased Examples Futurist PortraitHenrik von ScheelAuthored the 4th Industrial Revolution Tags4th Industrial Revolution, ARCHITECTURE, Artificial Intelligence, Chess, Club of Rome, DEMOCRACY, Diplomacy, ECONOMY, ENERGY, Healthcare, Heritage Foundation, Nuclear Fusion, Oil Industry, Ski, Sustainability, UN Welcome Felix B BoppProducer, The Future Now ShowFounder & Publisher, Club of Amsterdam Website statistics forclubofamsterdam.comFebruary 2025: 2025 visits 162,200 visitors 50,800 2024 visits 553,500 visitors 181,000 For Insiders Our website is enjoying a fantastic group of visitors and we get a lot of friendly comments that support us. THANK YOU we appreciate it! We would like to maintain the quality of our service, but need your support.Please support us and donate: And:Please have a look at the guidelines for posting comments. Quotes Hannah Ritchie: “In 2010 I started my degree in Environmental Geoscience at the University of Edinburgh. I showed up as a fresh-faced 16-year-old, ready to learn how we were going to fix some of the world’s biggest challenges. Four years later, I left with no solutions. Instead, I felt the deadweight of endless unsolvable problems. Each day at Edinburgh was a constant reminder of how humanity was ravaging the planet.” Henrik von Scheel: “The more resistant we are to change, the harder it will be for us to adapt. The people that don’t survive are the people resistant to change.” Adriaan Kamp: “We want to deepen our understanding, inspire more open, creative and richer conversations, advance our learning, and more smartly organize and enable a true reform- of what today is becoming so clearly broken.”